00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 977
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3639
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.145 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.146 The recommended git tool is: git
00:00:00.147 using credential 00000000-0000-0000-0000-000000000002
00:00:00.149 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.200 Fetching changes from the remote Git repository
00:00:00.202 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.244 Using shallow fetch with depth 1
00:00:00.244 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.244 > git --version # timeout=10
00:00:00.277 > git --version # 'git version 2.39.2'
00:00:00.277 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.298 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.298 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.875 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.886 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.897 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:05.897 > git config core.sparsecheckout # timeout=10
00:00:05.907 > git read-tree -mu HEAD # timeout=10
00:00:05.921 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:05.937 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:05.938 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:06.022 [Pipeline] Start of Pipeline
00:00:06.035 [Pipeline] library
00:00:06.037 Loading library shm_lib@master
00:00:06.037 Library shm_lib@master is cached. Copying from home.
00:00:06.052 [Pipeline] node
00:00:06.064 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.066 [Pipeline] {
00:00:06.074 [Pipeline] catchError
00:00:06.075 [Pipeline] {
00:00:06.083 [Pipeline] wrap
00:00:06.088 [Pipeline] {
00:00:06.094 [Pipeline] stage
00:00:06.095 [Pipeline] { (Prologue)
00:00:06.283 [Pipeline] sh
00:00:06.568 + logger -p user.info -t JENKINS-CI
00:00:06.589 [Pipeline] echo
00:00:06.591 Node: GP11
00:00:06.598 [Pipeline] sh
00:00:06.903 [Pipeline] setCustomBuildProperty
00:00:06.915 [Pipeline] echo
00:00:06.917 Cleanup processes
00:00:06.922 [Pipeline] sh
00:00:07.210 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.210 496195 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.223 [Pipeline] sh
00:00:07.509 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.509 ++ grep -v 'sudo pgrep'
00:00:07.509 ++ awk '{print $1}'
00:00:07.509 + sudo kill -9
00:00:07.509 + true
00:00:07.523 [Pipeline] cleanWs
00:00:07.531 [WS-CLEANUP] Deleting project workspace...
00:00:07.531 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.538 [WS-CLEANUP] done
00:00:07.541 [Pipeline] setCustomBuildProperty
00:00:07.550 [Pipeline] sh
00:00:07.832 + sudo git config --global --replace-all safe.directory '*'
00:00:07.918 [Pipeline] httpRequest
00:00:08.327 [Pipeline] echo
00:00:08.329 Sorcerer 10.211.164.20 is alive
00:00:08.335 [Pipeline] retry
00:00:08.336 [Pipeline] {
00:00:08.345 [Pipeline] httpRequest
00:00:08.350 HttpMethod: GET
00:00:08.350 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.351 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.375 Response Code: HTTP/1.1 200 OK
00:00:08.376 Success: Status code 200 is in the accepted range: 200,404
00:00:08.376 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:31.506 [Pipeline] }
00:00:31.524 [Pipeline] // retry
00:00:31.532 [Pipeline] sh
00:00:31.821 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:31.839 [Pipeline] httpRequest
00:00:32.249 [Pipeline] echo
00:00:32.251 Sorcerer 10.211.164.20 is alive
00:00:32.261 [Pipeline] retry
00:00:32.263 [Pipeline] {
00:00:32.277 [Pipeline] httpRequest
00:00:32.281 HttpMethod: GET
00:00:32.282 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:32.283 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:32.295 Response Code: HTTP/1.1 200 OK
00:00:32.295 Success: Status code 200 is in the accepted range: 200,404
00:00:32.296 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:01:24.977 [Pipeline] }
00:01:24.991 [Pipeline] // retry
00:01:24.997 [Pipeline] sh
00:01:25.281 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:01:28.577 [Pipeline] sh
00:01:28.862 + git -C spdk log --oneline -n5
00:01:28.862 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:01:28.862 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:01:28.862 4bcab9fb9 correct kick for CQ full case
00:01:28.862 8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:01:28.862 318515b44 nvme/perf: interrupt mode support for pcie controller
00:01:28.880 [Pipeline] withCredentials
00:01:28.892 > git --version # timeout=10
00:01:28.906 > git --version # 'git version 2.39.2'
00:01:28.926 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:28.928 [Pipeline] {
00:01:28.936 [Pipeline] retry
00:01:28.938 [Pipeline] {
00:01:28.953 [Pipeline] sh
00:01:29.238 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:29.250 [Pipeline] }
00:01:29.268 [Pipeline] // retry
00:01:29.274 [Pipeline] }
00:01:29.291 [Pipeline] // withCredentials
00:01:29.302 [Pipeline] httpRequest
00:01:29.612 [Pipeline] echo
00:01:29.614 Sorcerer 10.211.164.20 is alive
00:01:29.624 [Pipeline] retry
00:01:29.626 [Pipeline] {
00:01:29.640 [Pipeline] httpRequest
00:01:29.645 HttpMethod: GET
00:01:29.645 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:29.646 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:29.650 Response Code: HTTP/1.1 200 OK
00:01:29.651 Success: Status code 200 is in the accepted range: 200,404
00:01:29.651 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:36.745 [Pipeline] }
00:01:36.762 [Pipeline] // retry
00:01:36.769 [Pipeline] sh
00:01:37.063 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:38.990 [Pipeline] sh
00:01:39.277 + git -C dpdk log --oneline -n5
00:01:39.277 eeb0605f11 version: 23.11.0
00:01:39.277 238778122a doc: update release notes for 23.11
00:01:39.277 46aa6b3cfc doc: fix description of RSS features
00:01:39.277 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:39.278 7e421ae345 devtools: support skipping forbid rule check
00:01:39.289 [Pipeline] }
00:01:39.303 [Pipeline] // stage
00:01:39.312 [Pipeline] stage
00:01:39.314 [Pipeline] { (Prepare)
00:01:39.335 [Pipeline] writeFile
00:01:39.352 [Pipeline] sh
00:01:39.642 + logger -p user.info -t JENKINS-CI
00:01:39.656 [Pipeline] sh
00:01:39.944 + logger -p user.info -t JENKINS-CI
00:01:39.957 [Pipeline] sh
00:01:40.245 + cat autorun-spdk.conf
00:01:40.245 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:40.245 SPDK_TEST_NVMF=1
00:01:40.245 SPDK_TEST_NVME_CLI=1
00:01:40.245 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:40.245 SPDK_TEST_NVMF_NICS=e810
00:01:40.245 SPDK_TEST_VFIOUSER=1
00:01:40.245 SPDK_RUN_UBSAN=1
00:01:40.245 NET_TYPE=phy
00:01:40.245 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:40.245 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:40.253 RUN_NIGHTLY=1
00:01:40.258 [Pipeline] readFile
00:01:40.276 [Pipeline] withEnv
00:01:40.278 [Pipeline] {
00:01:40.289 [Pipeline] sh
00:01:40.579 + set -ex
00:01:40.579 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:40.579 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:40.579 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:40.579 ++ SPDK_TEST_NVMF=1
00:01:40.579 ++ SPDK_TEST_NVME_CLI=1
00:01:40.579 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:40.579 ++ SPDK_TEST_NVMF_NICS=e810
00:01:40.579 ++ SPDK_TEST_VFIOUSER=1
00:01:40.579 ++ SPDK_RUN_UBSAN=1
00:01:40.579 ++ NET_TYPE=phy
00:01:40.579 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:40.579 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:40.579 ++ RUN_NIGHTLY=1
00:01:40.579 + case $SPDK_TEST_NVMF_NICS in
00:01:40.579 + DRIVERS=ice
00:01:40.579 + [[ tcp == \r\d\m\a ]]
00:01:40.579 + [[ -n ice ]]
00:01:40.579 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:40.579 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:40.579 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:40.579 rmmod: ERROR: Module irdma is not currently loaded
00:01:40.579 rmmod: ERROR: Module i40iw is not currently loaded
00:01:40.579 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:40.579 + true
00:01:40.579 + for D in $DRIVERS
00:01:40.579 + sudo modprobe ice
00:01:40.579 + exit 0
00:01:40.590 [Pipeline] }
00:01:40.606 [Pipeline] // withEnv
00:01:40.611 [Pipeline] }
00:01:40.624 [Pipeline] // stage
00:01:40.635 [Pipeline] catchError
00:01:40.636 [Pipeline] {
00:01:40.649 [Pipeline] timeout
00:01:40.649 Timeout set to expire in 1 hr 0 min
00:01:40.651 [Pipeline] {
00:01:40.663 [Pipeline] stage
00:01:40.664 [Pipeline] { (Tests)
00:01:40.677 [Pipeline] sh
00:01:40.963 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:40.963 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:40.963 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:40.963 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:40.963 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:40.963 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:40.963 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:40.963 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:40.963 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:40.963 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:40.963 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:40.963 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:40.963 + source /etc/os-release
00:01:40.963 ++ NAME='Fedora Linux'
00:01:40.963 ++ VERSION='39 (Cloud Edition)'
00:01:40.963 ++ ID=fedora
00:01:40.963 ++ VERSION_ID=39
00:01:40.963 ++ VERSION_CODENAME=
00:01:40.963 ++ PLATFORM_ID=platform:f39
00:01:40.963 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:40.963 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:40.963 ++ LOGO=fedora-logo-icon
00:01:40.963 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:40.963 ++ HOME_URL=https://fedoraproject.org/
00:01:40.963 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:40.963 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:40.963 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:40.963 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:40.963 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:40.963 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:40.963 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:40.963 ++ SUPPORT_END=2024-11-12
00:01:40.963 ++ VARIANT='Cloud Edition'
00:01:40.963 ++ VARIANT_ID=cloud
00:01:40.963 + uname -a
00:01:40.963 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:40.963 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:41.902 Hugepages
00:01:41.902 node hugesize free / total
00:01:42.163 node0 1048576kB 0 / 0
00:01:42.163 node0 2048kB 0 / 0
00:01:42.163 node1 1048576kB 0 / 0
00:01:42.163 node1 2048kB 0 / 0
00:01:42.163
00:01:42.163 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:42.163 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:42.163 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:42.163 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:42.163 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:42.163 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:42.163 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:42.163 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:42.163 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:42.163 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:42.163 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:42.163 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:42.163 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:42.163 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:42.163 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:42.163 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:42.163 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:42.163 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:42.163 + rm -f /tmp/spdk-ld-path
00:01:42.163 + source autorun-spdk.conf
00:01:42.163 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:42.163 ++ SPDK_TEST_NVMF=1
00:01:42.163 ++ SPDK_TEST_NVME_CLI=1
00:01:42.163 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:42.163 ++ SPDK_TEST_NVMF_NICS=e810
00:01:42.163 ++ SPDK_TEST_VFIOUSER=1
00:01:42.163 ++ SPDK_RUN_UBSAN=1
00:01:42.164 ++ NET_TYPE=phy
00:01:42.164 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:42.164 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:42.164 ++ RUN_NIGHTLY=1
00:01:42.164 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:42.164 + [[ -n '' ]]
00:01:42.164 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:42.164 + for M in /var/spdk/build-*-manifest.txt
00:01:42.164 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:42.164 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:42.164 + for M in /var/spdk/build-*-manifest.txt
00:01:42.164 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:42.164 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:42.164 + for M in /var/spdk/build-*-manifest.txt
00:01:42.164 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:42.164 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:42.164 ++ uname
00:01:42.164 + [[ Linux == \L\i\n\u\x ]]
00:01:42.164 + sudo dmesg -T
00:01:42.164 + sudo dmesg --clear
00:01:42.164 + dmesg_pid=497531
00:01:42.164 + [[ Fedora Linux == FreeBSD ]]
00:01:42.164 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:42.164 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:42.164 + sudo dmesg -Tw
00:01:42.164 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:42.164 + [[ -x /usr/src/fio-static/fio ]]
00:01:42.164 + export FIO_BIN=/usr/src/fio-static/fio
00:01:42.164 + FIO_BIN=/usr/src/fio-static/fio
00:01:42.164 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:42.164 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:42.164 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:42.164 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:42.164 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:42.164 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:42.164 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:42.164 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:42.164 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:42.164 18:22:28 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:42.164 18:22:28 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:42.164 18:22:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:42.164 18:22:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:42.164 18:22:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:42.164 18:22:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:42.164 18:22:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:42.164 18:22:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:42.164 18:22:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:42.164 18:22:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:42.164 18:22:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:42.164 18:22:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:42.164 18:22:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1
00:01:42.164 18:22:28 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:42.164 18:22:28 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:42.423 18:22:28 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:42.423 18:22:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:42.423 18:22:28 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:42.423 18:22:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:42.423 18:22:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:42.423 18:22:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:42.423 18:22:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:42.423 18:22:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:42.423 18:22:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:42.423 18:22:28 -- paths/export.sh@5 -- $ export PATH
00:01:42.423 18:22:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:42.423 18:22:28 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:42.423 18:22:28 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:42.423 18:22:28 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731864148.XXXXXX
00:01:42.423 18:22:28 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731864148.Ps7W6h
00:01:42.423 18:22:28 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:42.423 18:22:28 -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']'
00:01:42.423 18:22:28 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:42.423 18:22:28 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:01:42.423 18:22:28 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:42.423 18:22:28 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:42.423 18:22:28 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:42.423 18:22:28 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:42.423 18:22:28 -- common/autotest_common.sh@10 -- $ set +x
00:01:42.423 18:22:28 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:01:42.423 18:22:28 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:42.423 18:22:28 -- pm/common@17 -- $ local monitor
00:01:42.423 18:22:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:42.423 18:22:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:42.423 18:22:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:42.423 18:22:28 -- pm/common@21 -- $ date +%s
00:01:42.423 18:22:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:42.423 18:22:28 -- pm/common@21 -- $ date +%s
00:01:42.423 18:22:28 -- pm/common@25 -- $ sleep 1
00:01:42.423 18:22:28 -- pm/common@21 -- $ date +%s
00:01:42.423 18:22:28 -- pm/common@21 -- $ date +%s
00:01:42.424 18:22:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731864148
00:01:42.424 18:22:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731864148
00:01:42.424 18:22:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731864148
00:01:42.424 18:22:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731864148
00:01:42.424 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731864148_collect-vmstat.pm.log
00:01:42.424 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731864148_collect-cpu-load.pm.log
00:01:42.424 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731864148_collect-cpu-temp.pm.log
00:01:42.424 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731864148_collect-bmc-pm.bmc.pm.log
00:01:43.364 18:22:29 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:43.364 18:22:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:43.364 18:22:29 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:43.364 18:22:29 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:43.364 18:22:29 -- spdk/autobuild.sh@16 -- $ date -u
00:01:43.364 Sun Nov 17 05:22:29 PM UTC 2024
00:01:43.364 18:22:29 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:43.364 v25.01-pre-189-g83e8405e4
00:01:43.364 18:22:29 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:43.364 18:22:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:43.364 18:22:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:43.364 18:22:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:43.364 18:22:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:43.364 18:22:29 -- common/autotest_common.sh@10 -- $ set +x
00:01:43.364 ************************************
00:01:43.364 START TEST ubsan
00:01:43.365 ************************************
00:01:43.365 18:22:29 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:43.365 using ubsan
00:01:43.365
00:01:43.365 real 0m0.000s
00:01:43.365 user 0m0.000s
00:01:43.365 sys 0m0.000s
00:01:43.365 18:22:29 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:43.365 18:22:29 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:43.365 ************************************
00:01:43.365 END TEST ubsan
00:01:43.365 ************************************
00:01:43.365 18:22:29 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:01:43.365 18:22:29 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:43.365 18:22:29 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:43.365 18:22:29 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:01:43.365 18:22:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:43.365 18:22:29 -- common/autotest_common.sh@10 -- $ set +x
00:01:43.365 ************************************
00:01:43.365 START TEST build_native_dpdk
00:01:43.365 ************************************
00:01:43.365 18:22:29 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:43.365 eeb0605f11 version: 23.11.0
00:01:43.365 238778122a doc: update release notes for 23.11
00:01:43.365 46aa6b3cfc doc: fix description of RSS features
00:01:43.365 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:43.365 7e421ae345 devtools: support skipping forbid rule check
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:43.365 patching file config/rte_config.h
00:01:43.365 Hunk #1 succeeded at 60 (offset 1 line).
00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:43.365 patching file lib/pcapng/rte_pcapng.c 00:01:43.365 18:22:29 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:43.365 18:22:29 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:43.365 18:22:29 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:43.366 18:22:29 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:43.366 18:22:29 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:43.366 18:22:29 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:43.366 18:22:29 build_native_dpdk -- common/autobuild_common.sh@184 -- 
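The three `cmp_versions` traces above (`lt 23.11.0 21.11.0`, `lt 23.11.0 24.07.0`, `ge 23.11.0 24.07.0`) walk a field-by-field numeric version comparison: each version string is split on `.`, `-`, and `:`, and the fields are compared left to right. A minimal bash sketch of that logic follows; the function name `cmp_versions_sketch` and the simplified operator handling are assumptions for illustration, not SPDK's exact `scripts/common.sh` implementation, and it handles numeric fields only:

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced in the log above.
# Splits on '.', '-', ':' and compares fields numerically, left to right.
cmp_versions_sketch() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    # Compare up to the longer field count; missing fields default to 0.
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v cmp='='
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then cmp='>'; break; fi
        if (( a < b )); then cmp='<'; break; fi
    done
    case $op in
        '<')  [[ $cmp == '<' ]] ;;
        '>=') [[ $cmp != '<' ]] ;;
    esac
}

# Same comparisons the log performs:
cmp_versions_sketch 23.11.0 '<' 21.11.0 || echo "23.11.0 is not < 21.11.0"
cmp_versions_sketch 23.11.0 '<' 24.07.0 && echo "23.11.0 is < 24.07.0"
```

This matches the observable behavior in the trace: the first comparison returns 1 (no downgrade patching needed) and the second returns 0, which is why the `rte_pcapng.c` patch at `autobuild_common.sh@177` is applied.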
$ '[' Linux = FreeBSD ']' 00:01:43.366 18:22:29 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:43.366 18:22:29 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:48.651 The Meson build system 00:01:48.651 Version: 1.5.0 00:01:48.651 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:48.651 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:48.651 Build type: native build 00:01:48.651 Program cat found: YES (/usr/bin/cat) 00:01:48.651 Project name: DPDK 00:01:48.651 Project version: 23.11.0 00:01:48.651 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:48.651 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:48.651 Host machine cpu family: x86_64 00:01:48.651 Host machine cpu: x86_64 00:01:48.651 Message: ## Building in Developer Mode ## 00:01:48.651 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:48.651 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:48.651 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:48.651 Program python3 found: YES (/usr/bin/python3) 00:01:48.651 Program cat found: YES (/usr/bin/cat) 00:01:48.651 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:48.651 Compiler for C supports arguments -march=native: YES 00:01:48.651 Checking for size of "void *" : 8 00:01:48.651 Checking for size of "void *" : 8 (cached) 00:01:48.651 Library m found: YES 00:01:48.651 Library numa found: YES 00:01:48.651 Has header "numaif.h" : YES 00:01:48.651 Library fdt found: NO 00:01:48.651 Library execinfo found: NO 00:01:48.651 Has header "execinfo.h" : YES 00:01:48.651 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:48.651 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:48.651 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:48.651 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:48.651 Run-time dependency openssl found: YES 3.1.1 00:01:48.651 Run-time dependency libpcap found: YES 1.10.4 00:01:48.651 Has header "pcap.h" with dependency libpcap: YES 00:01:48.651 Compiler for C supports arguments -Wcast-qual: YES 00:01:48.651 Compiler for C supports arguments -Wdeprecated: YES 00:01:48.651 Compiler for C supports arguments -Wformat: YES 00:01:48.651 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:48.651 Compiler for C supports arguments -Wformat-security: NO 00:01:48.651 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:48.651 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:48.651 Compiler for C supports arguments -Wnested-externs: YES 00:01:48.651 Compiler for C supports arguments -Wold-style-definition: YES 00:01:48.651 Compiler for C supports arguments -Wpointer-arith: YES 00:01:48.651 Compiler for C supports arguments -Wsign-compare: YES 00:01:48.652 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:48.652 Compiler for C supports arguments -Wundef: YES 00:01:48.652 Compiler for C supports arguments -Wwrite-strings: YES 00:01:48.652 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:48.652 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:48.652 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:01:48.652 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:48.652 Program objdump found: YES (/usr/bin/objdump) 00:01:48.652 Compiler for C supports arguments -mavx512f: YES 00:01:48.652 Checking if "AVX512 checking" compiles: YES 00:01:48.652 Fetching value of define "__SSE4_2__" : 1 00:01:48.652 Fetching value of define "__AES__" : 1 00:01:48.652 Fetching value of define "__AVX__" : 1 00:01:48.652 Fetching value of define "__AVX2__" : (undefined) 00:01:48.652 Fetching value of define "__AVX512BW__" : (undefined) 00:01:48.652 Fetching value of define "__AVX512CD__" : (undefined) 00:01:48.652 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:48.652 Fetching value of define "__AVX512F__" : (undefined) 00:01:48.652 Fetching value of define "__AVX512VL__" : (undefined) 00:01:48.652 Fetching value of define "__PCLMUL__" : 1 00:01:48.652 Fetching value of define "__RDRND__" : 1 00:01:48.652 Fetching value of define "__RDSEED__" : (undefined) 00:01:48.652 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:48.652 Fetching value of define "__znver1__" : (undefined) 00:01:48.652 Fetching value of define "__znver2__" : (undefined) 00:01:48.652 Fetching value of define "__znver3__" : (undefined) 00:01:48.652 Fetching value of define "__znver4__" : (undefined) 00:01:48.652 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:48.652 Message: lib/log: Defining dependency "log" 00:01:48.652 Message: lib/kvargs: Defining dependency "kvargs" 00:01:48.652 Message: lib/telemetry: Defining dependency "telemetry" 00:01:48.652 Checking for function "getentropy" : NO 00:01:48.652 Message: lib/eal: Defining dependency "eal" 00:01:48.652 Message: lib/ring: Defining dependency "ring" 00:01:48.652 Message: lib/rcu: Defining dependency "rcu" 00:01:48.652 Message: lib/mempool: Defining dependency "mempool" 00:01:48.652 Message: lib/mbuf: Defining dependency "mbuf" 00:01:48.652 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:48.652 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.652 Compiler for C supports arguments -mpclmul: YES 00:01:48.652 Compiler for C supports arguments -maes: YES 00:01:48.652 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.652 Compiler for C supports arguments -mavx512bw: YES 00:01:48.652 Compiler for C supports arguments -mavx512dq: YES 00:01:48.652 Compiler for C supports arguments -mavx512vl: YES 00:01:48.652 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:48.652 Compiler for C supports arguments -mavx2: YES 00:01:48.652 Compiler for C supports arguments -mavx: YES 00:01:48.652 Message: lib/net: Defining dependency "net" 00:01:48.652 Message: lib/meter: Defining dependency "meter" 00:01:48.652 Message: lib/ethdev: Defining dependency "ethdev" 00:01:48.652 Message: lib/pci: Defining dependency "pci" 00:01:48.652 Message: lib/cmdline: Defining dependency "cmdline" 00:01:48.652 Message: lib/metrics: Defining dependency "metrics" 00:01:48.652 Message: lib/hash: Defining dependency "hash" 00:01:48.652 Message: lib/timer: Defining dependency "timer" 00:01:48.652 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.652 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:48.652 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:48.652 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:48.652 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:48.652 Message: lib/acl: Defining dependency "acl" 00:01:48.652 Message: lib/bbdev: Defining dependency "bbdev" 00:01:48.652 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:48.652 Run-time dependency libelf found: YES 0.191 00:01:48.652 Message: lib/bpf: Defining dependency "bpf" 00:01:48.652 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:48.652 Message: lib/compressdev: Defining 
dependency "compressdev" 00:01:48.652 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:48.652 Message: lib/distributor: Defining dependency "distributor" 00:01:48.652 Message: lib/dmadev: Defining dependency "dmadev" 00:01:48.652 Message: lib/efd: Defining dependency "efd" 00:01:48.652 Message: lib/eventdev: Defining dependency "eventdev" 00:01:48.652 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:48.652 Message: lib/gpudev: Defining dependency "gpudev" 00:01:48.652 Message: lib/gro: Defining dependency "gro" 00:01:48.652 Message: lib/gso: Defining dependency "gso" 00:01:48.652 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:48.652 Message: lib/jobstats: Defining dependency "jobstats" 00:01:48.652 Message: lib/latencystats: Defining dependency "latencystats" 00:01:48.652 Message: lib/lpm: Defining dependency "lpm" 00:01:48.652 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.652 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:48.652 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:48.652 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:48.652 Message: lib/member: Defining dependency "member" 00:01:48.652 Message: lib/pcapng: Defining dependency "pcapng" 00:01:48.652 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:48.652 Message: lib/power: Defining dependency "power" 00:01:48.652 Message: lib/rawdev: Defining dependency "rawdev" 00:01:48.652 Message: lib/regexdev: Defining dependency "regexdev" 00:01:48.652 Message: lib/mldev: Defining dependency "mldev" 00:01:48.652 Message: lib/rib: Defining dependency "rib" 00:01:48.652 Message: lib/reorder: Defining dependency "reorder" 00:01:48.652 Message: lib/sched: Defining dependency "sched" 00:01:48.652 Message: lib/security: Defining dependency "security" 00:01:48.652 Message: lib/stack: Defining dependency "stack" 00:01:48.652 Has header "linux/userfaultfd.h" : YES 00:01:48.652 Has 
header "linux/vduse.h" : YES 00:01:48.652 Message: lib/vhost: Defining dependency "vhost" 00:01:48.652 Message: lib/ipsec: Defining dependency "ipsec" 00:01:48.652 Message: lib/pdcp: Defining dependency "pdcp" 00:01:48.652 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.652 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:48.652 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:48.652 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:48.652 Message: lib/fib: Defining dependency "fib" 00:01:48.652 Message: lib/port: Defining dependency "port" 00:01:48.652 Message: lib/pdump: Defining dependency "pdump" 00:01:48.652 Message: lib/table: Defining dependency "table" 00:01:48.652 Message: lib/pipeline: Defining dependency "pipeline" 00:01:48.652 Message: lib/graph: Defining dependency "graph" 00:01:48.652 Message: lib/node: Defining dependency "node" 00:01:50.038 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:50.038 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:50.038 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:50.038 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.038 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:50.038 Compiler for C supports arguments -Wno-unused-value: YES 00:01:50.038 Compiler for C supports arguments -Wno-format: YES 00:01:50.038 Compiler for C supports arguments -Wno-format-security: YES 00:01:50.038 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:50.038 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:50.038 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:50.039 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:50.039 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:50.039 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:50.039 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:01:50.039 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:50.039 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:50.039 Has header "sys/epoll.h" : YES 00:01:50.039 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:50.039 Configuring doxy-api-html.conf using configuration 00:01:50.039 Configuring doxy-api-man.conf using configuration 00:01:50.039 Program mandb found: YES (/usr/bin/mandb) 00:01:50.039 Program sphinx-build found: NO 00:01:50.039 Configuring rte_build_config.h using configuration 00:01:50.039 Message: 00:01:50.039 ================= 00:01:50.039 Applications Enabled 00:01:50.039 ================= 00:01:50.039 00:01:50.039 apps: 00:01:50.039 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:50.039 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:50.039 test-pmd, test-regex, test-sad, test-security-perf, 00:01:50.039 00:01:50.039 Message: 00:01:50.039 ================= 00:01:50.039 Libraries Enabled 00:01:50.039 ================= 00:01:50.039 00:01:50.039 libs: 00:01:50.039 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:50.039 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:50.039 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:50.039 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:50.039 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:50.039 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:50.039 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:50.039 00:01:50.039 00:01:50.039 Message: 00:01:50.039 =============== 00:01:50.039 Drivers Enabled 00:01:50.039 =============== 00:01:50.039 00:01:50.039 common: 00:01:50.039 00:01:50.039 bus: 00:01:50.039 pci, vdev, 00:01:50.039 mempool: 00:01:50.039 ring, 00:01:50.039 dma: 
00:01:50.039 00:01:50.039 net: 00:01:50.039 i40e, 00:01:50.039 raw: 00:01:50.039 00:01:50.039 crypto: 00:01:50.039 00:01:50.039 compress: 00:01:50.039 00:01:50.039 regex: 00:01:50.039 00:01:50.039 ml: 00:01:50.039 00:01:50.039 vdpa: 00:01:50.039 00:01:50.039 event: 00:01:50.039 00:01:50.039 baseband: 00:01:50.039 00:01:50.039 gpu: 00:01:50.039 00:01:50.039 00:01:50.039 Message: 00:01:50.039 ================= 00:01:50.039 Content Skipped 00:01:50.039 ================= 00:01:50.039 00:01:50.039 apps: 00:01:50.039 00:01:50.039 libs: 00:01:50.039 00:01:50.039 drivers: 00:01:50.039 common/cpt: not in enabled drivers build config 00:01:50.039 common/dpaax: not in enabled drivers build config 00:01:50.039 common/iavf: not in enabled drivers build config 00:01:50.039 common/idpf: not in enabled drivers build config 00:01:50.039 common/mvep: not in enabled drivers build config 00:01:50.039 common/octeontx: not in enabled drivers build config 00:01:50.039 bus/auxiliary: not in enabled drivers build config 00:01:50.039 bus/cdx: not in enabled drivers build config 00:01:50.039 bus/dpaa: not in enabled drivers build config 00:01:50.039 bus/fslmc: not in enabled drivers build config 00:01:50.039 bus/ifpga: not in enabled drivers build config 00:01:50.039 bus/platform: not in enabled drivers build config 00:01:50.039 bus/vmbus: not in enabled drivers build config 00:01:50.039 common/cnxk: not in enabled drivers build config 00:01:50.039 common/mlx5: not in enabled drivers build config 00:01:50.039 common/nfp: not in enabled drivers build config 00:01:50.039 common/qat: not in enabled drivers build config 00:01:50.039 common/sfc_efx: not in enabled drivers build config 00:01:50.039 mempool/bucket: not in enabled drivers build config 00:01:50.039 mempool/cnxk: not in enabled drivers build config 00:01:50.039 mempool/dpaa: not in enabled drivers build config 00:01:50.039 mempool/dpaa2: not in enabled drivers build config 00:01:50.039 mempool/octeontx: not in enabled drivers build 
config 00:01:50.039 mempool/stack: not in enabled drivers build config 00:01:50.039 dma/cnxk: not in enabled drivers build config 00:01:50.039 dma/dpaa: not in enabled drivers build config 00:01:50.039 dma/dpaa2: not in enabled drivers build config 00:01:50.039 dma/hisilicon: not in enabled drivers build config 00:01:50.039 dma/idxd: not in enabled drivers build config 00:01:50.039 dma/ioat: not in enabled drivers build config 00:01:50.039 dma/skeleton: not in enabled drivers build config 00:01:50.039 net/af_packet: not in enabled drivers build config 00:01:50.039 net/af_xdp: not in enabled drivers build config 00:01:50.039 net/ark: not in enabled drivers build config 00:01:50.039 net/atlantic: not in enabled drivers build config 00:01:50.039 net/avp: not in enabled drivers build config 00:01:50.039 net/axgbe: not in enabled drivers build config 00:01:50.039 net/bnx2x: not in enabled drivers build config 00:01:50.039 net/bnxt: not in enabled drivers build config 00:01:50.039 net/bonding: not in enabled drivers build config 00:01:50.039 net/cnxk: not in enabled drivers build config 00:01:50.039 net/cpfl: not in enabled drivers build config 00:01:50.039 net/cxgbe: not in enabled drivers build config 00:01:50.039 net/dpaa: not in enabled drivers build config 00:01:50.039 net/dpaa2: not in enabled drivers build config 00:01:50.039 net/e1000: not in enabled drivers build config 00:01:50.039 net/ena: not in enabled drivers build config 00:01:50.039 net/enetc: not in enabled drivers build config 00:01:50.039 net/enetfec: not in enabled drivers build config 00:01:50.039 net/enic: not in enabled drivers build config 00:01:50.039 net/failsafe: not in enabled drivers build config 00:01:50.039 net/fm10k: not in enabled drivers build config 00:01:50.039 net/gve: not in enabled drivers build config 00:01:50.039 net/hinic: not in enabled drivers build config 00:01:50.039 net/hns3: not in enabled drivers build config 00:01:50.039 net/iavf: not in enabled drivers build config 
00:01:50.039 net/ice: not in enabled drivers build config 00:01:50.039 net/idpf: not in enabled drivers build config 00:01:50.039 net/igc: not in enabled drivers build config 00:01:50.039 net/ionic: not in enabled drivers build config 00:01:50.039 net/ipn3ke: not in enabled drivers build config 00:01:50.039 net/ixgbe: not in enabled drivers build config 00:01:50.039 net/mana: not in enabled drivers build config 00:01:50.039 net/memif: not in enabled drivers build config 00:01:50.039 net/mlx4: not in enabled drivers build config 00:01:50.039 net/mlx5: not in enabled drivers build config 00:01:50.039 net/mvneta: not in enabled drivers build config 00:01:50.039 net/mvpp2: not in enabled drivers build config 00:01:50.039 net/netvsc: not in enabled drivers build config 00:01:50.039 net/nfb: not in enabled drivers build config 00:01:50.039 net/nfp: not in enabled drivers build config 00:01:50.039 net/ngbe: not in enabled drivers build config 00:01:50.039 net/null: not in enabled drivers build config 00:01:50.039 net/octeontx: not in enabled drivers build config 00:01:50.039 net/octeon_ep: not in enabled drivers build config 00:01:50.039 net/pcap: not in enabled drivers build config 00:01:50.039 net/pfe: not in enabled drivers build config 00:01:50.039 net/qede: not in enabled drivers build config 00:01:50.039 net/ring: not in enabled drivers build config 00:01:50.039 net/sfc: not in enabled drivers build config 00:01:50.039 net/softnic: not in enabled drivers build config 00:01:50.039 net/tap: not in enabled drivers build config 00:01:50.039 net/thunderx: not in enabled drivers build config 00:01:50.039 net/txgbe: not in enabled drivers build config 00:01:50.039 net/vdev_netvsc: not in enabled drivers build config 00:01:50.039 net/vhost: not in enabled drivers build config 00:01:50.039 net/virtio: not in enabled drivers build config 00:01:50.039 net/vmxnet3: not in enabled drivers build config 00:01:50.039 raw/cnxk_bphy: not in enabled drivers build config 00:01:50.039 
raw/cnxk_gpio: not in enabled drivers build config 00:01:50.039 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:50.039 raw/ifpga: not in enabled drivers build config 00:01:50.039 raw/ntb: not in enabled drivers build config 00:01:50.039 raw/skeleton: not in enabled drivers build config 00:01:50.039 crypto/armv8: not in enabled drivers build config 00:01:50.039 crypto/bcmfs: not in enabled drivers build config 00:01:50.039 crypto/caam_jr: not in enabled drivers build config 00:01:50.039 crypto/ccp: not in enabled drivers build config 00:01:50.039 crypto/cnxk: not in enabled drivers build config 00:01:50.039 crypto/dpaa_sec: not in enabled drivers build config 00:01:50.039 crypto/dpaa2_sec: not in enabled drivers build config 00:01:50.039 crypto/ipsec_mb: not in enabled drivers build config 00:01:50.039 crypto/mlx5: not in enabled drivers build config 00:01:50.039 crypto/mvsam: not in enabled drivers build config 00:01:50.039 crypto/nitrox: not in enabled drivers build config 00:01:50.039 crypto/null: not in enabled drivers build config 00:01:50.039 crypto/octeontx: not in enabled drivers build config 00:01:50.039 crypto/openssl: not in enabled drivers build config 00:01:50.039 crypto/scheduler: not in enabled drivers build config 00:01:50.039 crypto/uadk: not in enabled drivers build config 00:01:50.039 crypto/virtio: not in enabled drivers build config 00:01:50.039 compress/isal: not in enabled drivers build config 00:01:50.039 compress/mlx5: not in enabled drivers build config 00:01:50.039 compress/octeontx: not in enabled drivers build config 00:01:50.039 compress/zlib: not in enabled drivers build config 00:01:50.039 regex/mlx5: not in enabled drivers build config 00:01:50.039 regex/cn9k: not in enabled drivers build config 00:01:50.039 ml/cnxk: not in enabled drivers build config 00:01:50.039 vdpa/ifc: not in enabled drivers build config 00:01:50.039 vdpa/mlx5: not in enabled drivers build config 00:01:50.039 vdpa/nfp: not in enabled drivers build 
config 00:01:50.039 vdpa/sfc: not in enabled drivers build config 00:01:50.039 event/cnxk: not in enabled drivers build config 00:01:50.040 event/dlb2: not in enabled drivers build config 00:01:50.040 event/dpaa: not in enabled drivers build config 00:01:50.040 event/dpaa2: not in enabled drivers build config 00:01:50.040 event/dsw: not in enabled drivers build config 00:01:50.040 event/opdl: not in enabled drivers build config 00:01:50.040 event/skeleton: not in enabled drivers build config 00:01:50.040 event/sw: not in enabled drivers build config 00:01:50.040 event/octeontx: not in enabled drivers build config 00:01:50.040 baseband/acc: not in enabled drivers build config 00:01:50.040 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:50.040 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:50.040 baseband/la12xx: not in enabled drivers build config 00:01:50.040 baseband/null: not in enabled drivers build config 00:01:50.040 baseband/turbo_sw: not in enabled drivers build config 00:01:50.040 gpu/cuda: not in enabled drivers build config 00:01:50.040 00:01:50.040 00:01:50.040 Build targets in project: 220 00:01:50.040 00:01:50.040 DPDK 23.11.0 00:01:50.040 00:01:50.040 User defined options 00:01:50.040 libdir : lib 00:01:50.040 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:50.040 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:50.040 c_link_args : 00:01:50.040 enable_docs : false 00:01:50.040 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:50.040 enable_kmods : false 00:01:50.040 machine : native 00:01:50.040 tests : false 00:01:50.040 00:01:50.040 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:50.040 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
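The final meson warning above notes that running `meson [options]` without a subcommand is deprecated and ambiguous. A sketch of the equivalent explicit invocation, with paths and flags copied from the command line shown earlier in this log (only the `setup` subcommand is added; the trailing comma in `-Denable_drivers` is preserved from the log, and meson accepts it):

```shell
meson setup build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
  --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false \
  -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Dmachine=native \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
```

Note the deprecation warning for `-Dmachine` at `config/meson.build:113` as well: newer DPDK builds expect `-Dcpu_instruction_set` instead.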
00:01:50.040 18:22:36 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:50.040 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:50.040 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:50.040 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:50.040 [3/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:50.040 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:50.040 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:50.040 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:50.303 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:50.304 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:50.304 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:50.304 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:50.304 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:50.304 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:50.304 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:50.304 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:50.304 [15/710] Linking static target lib/librte_kvargs.a 00:01:50.304 [16/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:50.304 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:50.304 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:50.304 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:50.304 [20/710] Linking static target lib/librte_log.a 00:01:50.565 [21/710] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:50.565 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.142 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.142 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:51.142 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.142 [26/710] Linking target lib/librte_log.so.24.0 00:01:51.142 [27/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:51.142 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:51.142 [29/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:51.142 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:51.142 [31/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:51.142 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:51.142 [33/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:51.142 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:51.142 [35/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:51.402 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.402 [37/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:51.402 [38/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.402 [39/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:51.402 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:51.402 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.402 [42/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.402 [43/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:51.402 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:51.402 [45/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:51.402 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:51.402 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.402 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:51.402 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:51.402 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:51.402 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:51.402 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:51.402 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:51.402 [54/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:51.402 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:51.402 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:51.402 [57/710] Linking target lib/librte_kvargs.so.24.0 00:01:51.403 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:51.403 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:51.403 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:51.403 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:51.664 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:51.664 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:51.664 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:51.664 [65/710] Generating symbol file 
lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:51.926 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:51.926 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:51.926 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:51.926 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:51.926 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:51.926 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:51.926 [72/710] Linking static target lib/librte_pci.a 00:01:52.190 [73/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.190 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:52.190 [75/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:52.190 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:52.190 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:52.190 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.459 [79/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:52.459 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.459 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.459 [82/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:52.459 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.459 [84/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.459 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:52.459 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:52.459 [87/710] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:52.459 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:52.459 [89/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:52.459 [90/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:52.459 [91/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:52.459 [92/710] Linking static target lib/librte_ring.a 00:01:52.459 [93/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.459 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.459 [95/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.719 [96/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:52.719 [97/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.719 [98/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:52.719 [99/710] Linking static target lib/librte_meter.a 00:01:52.719 [100/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.719 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.719 [102/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:52.719 [103/710] Linking static target lib/librte_telemetry.a 00:01:52.719 [104/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.719 [105/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:52.719 [106/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:52.719 [107/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.719 [108/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.719 [109/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.983 [110/710] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.983 [111/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:52.983 [112/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.983 [113/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:52.983 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.983 [115/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.983 [116/710] Linking static target lib/librte_eal.a 00:01:52.983 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.983 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.983 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.983 [120/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:53.254 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:53.254 [122/710] Linking static target lib/librte_net.a 00:01:53.254 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:53.254 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:53.254 [125/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:53.254 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:53.254 [127/710] Linking static target lib/librte_cmdline.a 00:01:53.542 [128/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:53.542 [129/710] Linking static target lib/librte_mempool.a 00:01:53.542 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.542 [131/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:53.542 [132/710] Linking static target lib/librte_cfgfile.a 00:01:53.542 [133/710] Linking target lib/librte_telemetry.so.24.0 
00:01:53.542 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:53.542 [135/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.542 [136/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:53.542 [137/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:53.542 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:53.814 [139/710] Linking static target lib/librte_metrics.a 00:01:53.814 [140/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:53.814 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:53.814 [142/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:53.814 [143/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:53.814 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:54.082 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:54.082 [146/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:54.082 [147/710] Linking static target lib/librte_bitratestats.a 00:01:54.082 [148/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:54.082 [149/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:54.082 [150/710] Linking static target lib/librte_rcu.a 00:01:54.082 [151/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.082 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:54.082 [153/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:54.082 [154/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:54.082 [155/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:54.346 [156/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:54.346 [157/710] Generating 
lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.346 [158/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:54.346 [159/710] Linking static target lib/librte_timer.a 00:01:54.346 [160/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:54.346 [161/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.346 [162/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:54.346 [163/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.346 [164/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:54.609 [165/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.609 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:54.609 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:54.609 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:54.609 [169/710] Linking static target lib/librte_bbdev.a 00:01:54.609 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:54.873 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.873 [172/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:54.873 [173/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:54.873 [174/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:54.873 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:54.873 [176/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.873 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:54.873 
[178/710] Linking static target lib/librte_compressdev.a 00:01:54.873 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:55.136 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:55.136 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:55.136 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:55.401 [183/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:55.401 [184/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:55.401 [185/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:55.401 [186/710] Linking static target lib/librte_distributor.a 00:01:55.664 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.664 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:55.664 [189/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.664 [190/710] Linking static target lib/librte_bpf.a 00:01:55.664 [191/710] Linking static target lib/librte_dmadev.a 00:01:55.664 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:55.664 [193/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.664 [194/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:55.928 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:55.928 [196/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:55.928 [197/710] Linking static target lib/librte_dispatcher.a 00:01:55.928 [198/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:55.928 [199/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.928 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 
00:01:55.928 [201/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:55.928 [202/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:55.928 [203/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:55.928 [204/710] Linking static target lib/librte_gpudev.a 00:01:55.928 [205/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:55.928 [206/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:56.193 [207/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:56.193 [208/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:56.193 [209/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:56.193 [210/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:56.193 [211/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.193 [212/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:56.193 [213/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:56.193 [214/710] Linking static target lib/librte_gro.a 00:01:56.193 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:56.193 [216/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:56.193 [217/710] Linking static target lib/librte_jobstats.a 00:01:56.193 [218/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.459 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:56.459 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:56.459 [221/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:56.459 [222/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.459 [223/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:56.722 [224/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:56.722 [225/710] Linking static target lib/librte_latencystats.a 00:01:56.722 [226/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:56.722 [227/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.722 [228/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:56.722 [229/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:56.722 [230/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:56.722 [231/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:56.986 [232/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:56.986 [233/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:56.986 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:56.986 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:56.986 [236/710] Linking static target lib/librte_ip_frag.a 00:01:56.986 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:57.251 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.251 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:57.251 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:57.251 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:57.519 [242/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:57.519 [243/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:57.519 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:57.519 [245/710] Generating lib/ip_frag.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:57.519 [246/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.519 [247/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:57.519 [248/710] Linking static target lib/librte_gso.a 00:01:57.519 [249/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:57.781 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:57.781 [251/710] Linking static target lib/librte_regexdev.a 00:01:57.781 [252/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:57.781 [253/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:57.781 [254/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:57.781 [255/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:57.781 [256/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:57.781 [257/710] Linking static target lib/librte_rawdev.a 00:01:57.781 [258/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.781 [259/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:58.047 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:58.047 [261/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:58.047 [262/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:58.047 [263/710] Linking static target lib/librte_efd.a 00:01:58.047 [264/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:58.047 [265/710] Linking static target lib/librte_mldev.a 00:01:58.047 [266/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:58.047 [267/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:58.047 [268/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:58.047 
[269/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:58.047 [270/710] Linking static target lib/librte_pcapng.a 00:01:58.047 [271/710] Linking static target lib/acl/libavx2_tmp.a 00:01:58.310 [272/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:58.310 [273/710] Linking static target lib/librte_stack.a 00:01:58.310 [274/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:58.310 [275/710] Linking static target lib/librte_lpm.a 00:01:58.310 [276/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:58.310 [277/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:58.310 [278/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.575 [279/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.575 [280/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:58.575 [281/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:58.575 [282/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:58.575 [283/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.575 [284/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.575 [285/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.575 [286/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:58.575 [287/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:58.575 [288/710] Linking static target lib/acl/libavx512_tmp.a 00:01:58.575 [289/710] Linking static target lib/librte_hash.a 00:01:58.575 [290/710] Linking static target lib/librte_acl.a 00:01:58.838 [291/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:58.838 [292/710] Linking static target lib/librte_reorder.a 
00:01:58.838 [293/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:58.838 [294/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.838 [295/710] Linking static target lib/librte_power.a 00:01:58.838 [296/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.838 [297/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:58.838 [298/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:58.838 [299/710] Linking static target lib/librte_security.a 00:01:58.838 [300/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.122 [301/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:59.123 [302/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:59.123 [303/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.123 [304/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:59.123 [305/710] Linking static target lib/librte_mbuf.a 00:01:59.123 [306/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.123 [307/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.400 [308/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:59.400 [309/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.400 [310/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:59.400 [311/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:59.400 [312/710] Linking static target lib/librte_rib.a 00:01:59.400 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:59.400 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:59.400 [315/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:59.400 [316/710] Compiling C object 
lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:59.685 [317/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.685 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.685 [319/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:59.685 [320/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:59.685 [321/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:59.685 [322/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:59.685 [323/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:59.685 [324/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:59.685 [325/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:59.685 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.957 [327/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:59.957 [328/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.957 [329/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.957 [330/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.222 [331/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:00.222 [332/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:00.222 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:00.222 [334/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:00.222 [335/710] Linking static target lib/librte_member.a 00:02:00.487 [336/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:00.487 [337/710] Linking static target lib/librte_eventdev.a 00:02:00.487 [338/710] Compiling C 
object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:00.487 [339/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:00.487 [340/710] Linking static target lib/librte_cryptodev.a 00:02:00.487 [341/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:00.759 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:00.759 [343/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:00.759 [344/710] Linking static target lib/librte_ethdev.a 00:02:00.759 [345/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:00.759 [346/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:00.759 [347/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.025 [348/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:01.025 [349/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:01.025 [350/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:01.025 [351/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:01.025 [352/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:01.025 [353/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:01.025 [354/710] Linking static target lib/librte_sched.a 00:02:01.025 [355/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:01.025 [356/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:01.025 [357/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:01.025 [358/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:01.025 [359/710] Linking static target lib/librte_fib.a 00:02:01.285 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:01.285 [361/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 
00:02:01.285 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:01.285 [363/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:01.285 [364/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:01.285 [365/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:01.549 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:01.549 [367/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:01.549 [368/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:01.549 [369/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:01.549 [370/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.549 [371/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.549 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:01.817 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:01.817 [374/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:02.078 [375/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:02.078 [376/710] Linking static target lib/librte_pdump.a 00:02:02.078 [377/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:02.078 [378/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:02.078 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:02.078 [380/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:02.078 [381/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:02.343 [382/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:02.343 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:02.343 [384/710] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:02.343 [385/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:02.343 [386/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:02.343 [387/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:02.343 [388/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:02.343 [389/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:02.343 [390/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.611 [391/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:02.611 [392/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.611 [393/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:02.611 [394/710] Linking static target lib/librte_table.a 00:02:02.611 [395/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:02.611 [396/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:02.611 [397/710] Linking static target lib/librte_ipsec.a 00:02:02.876 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:02.876 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:02.876 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:03.136 [401/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:03.399 [402/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.399 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:03.399 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:03.399 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:03.664 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:03.664 
[407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:03.664 [408/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:03.664 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:03.664 [410/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:03.664 [411/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.664 [412/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:03.664 [413/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:03.664 [414/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:03.664 [415/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:03.926 [416/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.926 [417/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:03.926 [418/710] Linking target lib/librte_eal.so.24.0 00:02:03.926 [419/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.926 [420/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:03.926 [421/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:04.190 [422/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:04.190 [423/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.190 [424/710] Linking static target drivers/librte_bus_vdev.a 00:02:04.190 [425/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:04.190 [426/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.190 [427/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:04.190 [428/710] Compiling C object 
lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:04.190 [429/710] Linking target lib/librte_ring.so.24.0 00:02:04.190 [430/710] Linking target lib/librte_meter.so.24.0 00:02:04.454 [431/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:04.454 [432/710] Linking target lib/librte_pci.so.24.0 00:02:04.454 [433/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:04.454 [434/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:04.454 [435/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:04.454 [436/710] Linking target lib/librte_timer.so.24.0 00:02:04.454 [437/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:04.723 [438/710] Linking target lib/librte_rcu.so.24.0 00:02:04.723 [439/710] Linking target lib/librte_mempool.so.24.0 00:02:04.723 [440/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:04.723 [441/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.723 [442/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:04.723 [443/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:04.723 [444/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:04.723 [445/710] Linking target lib/librte_acl.so.24.0 00:02:04.723 [446/710] Linking target lib/librte_cfgfile.so.24.0 00:02:04.723 [447/710] Linking target lib/librte_dmadev.so.24.0 00:02:04.723 [448/710] Linking static target lib/librte_port.a 00:02:04.723 [449/710] Linking target lib/librte_jobstats.so.24.0 00:02:04.723 [450/710] Linking target lib/librte_stack.so.24.0 00:02:04.723 [451/710] Linking target lib/librte_rawdev.so.24.0 00:02:04.723 [452/710] Linking static target lib/librte_graph.a 00:02:04.723 [453/710] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.723 [454/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.723 [455/710] Linking static target drivers/librte_bus_pci.a 00:02:04.724 [456/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:04.988 [457/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:04.988 [458/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:04.988 [459/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:04.988 [460/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:04.988 [461/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:04.988 [462/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:04.988 [463/710] Linking target lib/librte_rib.so.24.0 00:02:04.988 [464/710] Linking target lib/librte_mbuf.so.24.0 00:02:04.988 [465/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:04.988 [466/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:04.988 [467/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:04.988 [468/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:05.258 [469/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:05.258 [470/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:05.258 [471/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:05.258 [472/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:05.258 [473/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:05.258 [474/710] Linking target lib/librte_fib.so.24.0 00:02:05.258 [475/710] Linking target lib/librte_net.so.24.0 00:02:05.521 
[476/710] Linking target lib/librte_bbdev.so.24.0 00:02:05.521 [477/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:05.521 [478/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:05.521 [479/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:05.521 [480/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:05.521 [481/710] Linking target lib/librte_compressdev.so.24.0 00:02:05.521 [482/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:05.521 [483/710] Linking target lib/librte_gpudev.so.24.0 00:02:05.521 [484/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:05.521 [485/710] Linking target lib/librte_distributor.so.24.0 00:02:05.521 [486/710] Linking target lib/librte_cryptodev.so.24.0 00:02:05.521 [487/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.521 [488/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.521 [489/710] Linking target lib/librte_regexdev.so.24.0 00:02:05.521 [490/710] Linking static target drivers/librte_mempool_ring.a 00:02:05.521 [491/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.521 [492/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:05.521 [493/710] Linking target lib/librte_mldev.so.24.0 00:02:05.521 [494/710] Linking target lib/librte_reorder.so.24.0 00:02:05.521 [495/710] Linking target lib/librte_sched.so.24.0 00:02:05.521 [496/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.521 [497/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:05.521 [498/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:05.785 [499/710] Linking target lib/librte_cmdline.so.24.0 00:02:05.785 [500/710] Linking 
target lib/librte_hash.so.24.0 00:02:05.785 [501/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:05.785 [502/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:05.786 [503/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:05.786 [504/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:05.786 [505/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:05.786 [506/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:05.786 [507/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:05.786 [508/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.786 [509/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:05.786 [510/710] Linking target lib/librte_security.so.24.0 00:02:05.786 [511/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:06.051 [512/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:06.051 [513/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:06.051 [514/710] Linking target lib/librte_efd.so.24.0 00:02:06.051 [515/710] Linking target lib/librte_lpm.so.24.0 00:02:06.051 [516/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:06.051 [517/710] Linking target lib/librte_member.so.24.0 00:02:06.051 [518/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:06.051 [519/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:06.314 [520/710] Linking target lib/librte_ipsec.so.24.0 00:02:06.314 [521/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:06.314 [522/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:06.314 [523/710] Generating symbol file 
lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:06.314 [524/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:06.314 [525/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:06.574 [526/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:06.574 [527/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:06.574 [528/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:06.574 [529/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:06.574 [530/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:06.574 [531/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:06.836 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:07.099 [533/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:07.099 [534/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:07.099 [535/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:07.099 [536/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:07.099 [537/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:07.099 [538/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:07.366 [539/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:07.366 [540/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:07.366 [541/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:07.367 [542/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:07.629 [543/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:07.629 [544/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 
00:02:07.629 [545/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:07.894 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:07.894 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:07.894 [548/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:07.894 [549/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:07.894 [550/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:07.894 [551/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:07.894 [552/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:07.894 [553/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:07.894 [554/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:08.159 [555/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:08.159 [556/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:08.159 [557/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:08.421 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:08.421 [559/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:08.686 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:08.945 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:08.945 [562/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:08.945 [563/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:08.945 [564/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:09.206 [565/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 
00:02:09.206 [566/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.206 [567/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:09.206 [568/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:09.206 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:09.206 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:09.206 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:09.206 [572/710] Linking target lib/librte_ethdev.so.24.0 00:02:09.474 [573/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:09.474 [574/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:09.474 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:09.474 [576/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:09.474 [577/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:09.737 [578/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:09.737 [579/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:09.737 [580/710] Linking target lib/librte_metrics.so.24.0 00:02:09.737 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:09.737 [582/710] Linking target lib/librte_bpf.so.24.0 00:02:09.737 [583/710] Linking target lib/librte_gro.so.24.0 00:02:09.737 [584/710] Linking target lib/librte_eventdev.so.24.0 00:02:09.737 [585/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:09.737 [586/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:09.998 [587/710] Linking target lib/librte_gso.so.24.0 00:02:09.998 [588/710] Linking target 
lib/librte_ip_frag.so.24.0 00:02:09.998 [589/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:09.998 [590/710] Linking target lib/librte_pcapng.so.24.0 00:02:09.998 [591/710] Linking target lib/librte_power.so.24.0 00:02:09.998 [592/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:09.998 [593/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:09.998 [594/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:09.998 [595/710] Linking target lib/librte_bitratestats.so.24.0 00:02:09.998 [596/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:09.998 [597/710] Linking target lib/librte_latencystats.so.24.0 00:02:09.998 [598/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:09.998 [599/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:10.260 [600/710] Linking static target lib/librte_pdcp.a 00:02:10.261 [601/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:10.261 [602/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:10.261 [603/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:10.261 [604/710] Linking target lib/librte_dispatcher.so.24.0 00:02:10.261 [605/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:10.261 [606/710] Linking target lib/librte_pdump.so.24.0 00:02:10.261 [607/710] Linking target lib/librte_port.so.24.0 00:02:10.261 [608/710] Linking target lib/librte_graph.so.24.0 00:02:10.261 [609/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:10.524 [610/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:10.524 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:10.524 [612/710] Generating symbol 
file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:10.524 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:10.524 [614/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:10.524 [615/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:10.524 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:10.524 [617/710] Linking target lib/librte_table.so.24.0 00:02:10.790 [618/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.790 [619/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:10.790 [620/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:10.790 [621/710] Linking target lib/librte_pdcp.so.24.0 00:02:10.790 [622/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:10.790 [623/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:10.790 [624/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:10.790 [625/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:10.790 [626/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:11.052 [627/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:11.052 [628/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:11.052 [629/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:11.316 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:11.576 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:11.576 [632/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:11.576 [633/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:11.576 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:11.576 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:11.836 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:11.836 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:11.836 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:11.836 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:11.836 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:11.836 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:12.095 [642/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:12.095 [643/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:12.095 [644/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:12.095 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:12.354 [646/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:12.354 [647/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:12.613 [648/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:12.613 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:12.613 [650/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:12.872 [651/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:12.872 [652/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:12.872 [653/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:12.872 [654/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:12.872 [655/710] Linking static 
target drivers/libtmp_rte_net_i40e.a 00:02:12.872 [656/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:12.872 [657/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:12.872 [658/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:13.132 [659/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:13.391 [660/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:13.391 [661/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:13.391 [662/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:13.391 [663/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:13.391 [664/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:13.391 [665/710] Linking static target drivers/librte_net_i40e.a 00:02:13.648 [666/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:13.906 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:13.906 [668/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.906 [669/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:14.164 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:14.422 [671/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:14.680 [672/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:14.680 [673/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:14.680 [674/710] Linking static target lib/librte_node.a 00:02:14.938 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.938 [676/710] Linking target lib/librte_node.so.24.0 00:02:16.309 [677/710] 
Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:16.309 [678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:16.309 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:18.210 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:18.776 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:25.339 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:57.443 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:57.443 [684/710] Linking static target lib/librte_vhost.a 00:02:57.443 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.443 [686/710] Linking target lib/librte_vhost.so.24.0 00:03:07.427 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:07.427 [688/710] Linking static target lib/librte_pipeline.a 00:03:07.427 [689/710] Linking target app/dpdk-pdump 00:03:07.427 [690/710] Linking target app/dpdk-proc-info 00:03:07.427 [691/710] Linking target app/dpdk-dumpcap 00:03:07.427 [692/710] Linking target app/dpdk-test-cmdline 00:03:07.427 [693/710] Linking target app/dpdk-test-acl 00:03:07.427 [694/710] Linking target app/dpdk-test-fib 00:03:07.427 [695/710] Linking target app/dpdk-test-dma-perf 00:03:07.427 [696/710] Linking target app/dpdk-test-mldev 00:03:07.427 [697/710] Linking target app/dpdk-test-pipeline 00:03:07.427 [698/710] Linking target app/dpdk-graph 00:03:07.427 [699/710] Linking target app/dpdk-test-sad 00:03:07.427 [700/710] Linking target app/dpdk-test-compress-perf 00:03:07.427 [701/710] Linking target app/dpdk-test-regex 00:03:07.427 [702/710] Linking target app/dpdk-test-security-perf 00:03:07.427 [703/710] Linking target app/dpdk-test-gpudev 00:03:07.427 [704/710] Linking target app/dpdk-test-flow-perf 00:03:07.427 [705/710] Linking target 
app/dpdk-test-bbdev 00:03:07.427 [706/710] Linking target app/dpdk-test-eventdev 00:03:07.427 [707/710] Linking target app/dpdk-test-crypto-perf 00:03:07.427 [708/710] Linking target app/dpdk-testpmd 00:03:09.332 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.332 [710/710] Linking target lib/librte_pipeline.so.24.0 00:03:09.332 18:23:55 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:09.332 18:23:55 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:09.332 18:23:55 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:09.332 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:09.332 [0/1] Installing files. 00:03:09.595 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:09.598 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.599 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.599 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.600 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:09.600 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:09.600 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.600 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:09.601 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:10.541 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:10.541 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:10.541 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:10.541 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:10.541 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:10.542 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.542 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.543 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.544 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.545 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:10.545 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:10.545 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:10.545 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:10.545 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:10.545 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:10.545 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:10.545 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:10.545 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:10.545 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:10.545 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:10.545 
Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:10.545 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:10.545 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:10.545 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:10.545 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:10.545 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:10.546 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:10.546 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:10.546 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:10.546 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:10.546 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:10.546 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:10.546 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:10.546 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 
00:03:10.546 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:10.546 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:10.546 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:10.546 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:10.546 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:10.546 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:10.546 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:10.546 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:10.546 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:10.546 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:10.546 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:10.546 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:10.546 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:10.546 Installing symlink pointing to librte_bitratestats.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:10.546 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:10.546 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:10.546 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:10.546 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:10.546 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:10.546 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:10.546 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:10.546 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:10.546 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:10.546 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:10.546 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:10.546 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:10.546 Installing symlink pointing to librte_dmadev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:10.546 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:10.546 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:10.546 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:10.546 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:10.546 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:10.546 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:10.546 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:10.546 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:10.546 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:10.546 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:10.546 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:10.546 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:10.546 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:10.546 Installing 
symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:10.546 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:10.546 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:10.546 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:10.546 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:10.546 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:10.546 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:10.546 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:10.546 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:10.546 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:10.546 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:10.546 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:10.546 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:10.546 Installing symlink pointing to librte_rawdev.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:10.546 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:10.546 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:10.546 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:10.546 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:10.546 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:10.546 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:10.546 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:10.546 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:10.546 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:10.546 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:10.546 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:10.546 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:10.546 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:10.546 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:10.546 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:10.546 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:10.546 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:10.546 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:10.546 Installing symlink pointing to librte_rib.so.24 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:10.546 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:10.546 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:10.546 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:10.546 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:10.546 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:10.546 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:10.546 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:10.546 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:10.546 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:10.546 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:10.546 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:10.546 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:10.546 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:10.546 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:10.547 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:10.547 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:10.547 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:10.547 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:10.547 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:10.547 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:10.547 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:10.547 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:10.547 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:10.547 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:10.547 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:10.547 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:10.547 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 
00:03:10.547 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:10.547 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:10.547 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:10.547 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:10.547 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:10.547 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:10.547 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:10.547 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:10.547 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:10.547 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:10.547 18:23:56 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:10.547 18:23:56 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:10.547 00:03:10.547 real 1m27.008s 00:03:10.547 user 18m5.062s 00:03:10.547 sys 2m8.300s 00:03:10.547 18:23:56 build_native_dpdk -- 
common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:10.547 18:23:56 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:10.547 ************************************ 00:03:10.547 END TEST build_native_dpdk 00:03:10.547 ************************************ 00:03:10.547 18:23:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:10.547 18:23:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:10.547 18:23:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:10.547 18:23:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:10.547 18:23:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:10.547 18:23:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:10.547 18:23:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:10.547 18:23:56 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:10.547 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:10.547 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:10.547 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:10.805 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:11.064 Using 'verbs' RDMA provider 00:03:21.615 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:31.601 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:31.601 Creating mk/config.mk...done. 00:03:31.601 Creating mk/cc.flags.mk...done. 00:03:31.601 Type 'make' to build. 
00:03:31.601 18:24:17 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:03:31.601 18:24:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:31.601 18:24:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:31.601 18:24:17 -- common/autotest_common.sh@10 -- $ set +x 00:03:31.601 ************************************ 00:03:31.601 START TEST make 00:03:31.601 ************************************ 00:03:31.601 18:24:17 make -- common/autotest_common.sh@1129 -- $ make -j48 00:03:31.601 make[1]: Nothing to be done for 'all'. 00:03:32.998 The Meson build system 00:03:32.998 Version: 1.5.0 00:03:32.998 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:32.998 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:32.998 Build type: native build 00:03:32.998 Project name: libvfio-user 00:03:32.998 Project version: 0.0.1 00:03:32.998 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:32.998 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:32.998 Host machine cpu family: x86_64 00:03:32.998 Host machine cpu: x86_64 00:03:32.998 Run-time dependency threads found: YES 00:03:32.998 Library dl found: YES 00:03:32.998 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:32.998 Run-time dependency json-c found: YES 0.17 00:03:32.998 Run-time dependency cmocka found: YES 1.1.7 00:03:32.998 Program pytest-3 found: NO 00:03:32.998 Program flake8 found: NO 00:03:32.998 Program misspell-fixer found: NO 00:03:32.998 Program restructuredtext-lint found: NO 00:03:32.998 Program valgrind found: YES (/usr/bin/valgrind) 00:03:32.998 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:32.998 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:32.998 Compiler for C supports arguments -Wwrite-strings: YES 00:03:32.998 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:32.998 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:32.998 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:32.998 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:32.998 Build targets in project: 8
00:03:32.999 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:32.999 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:32.999
00:03:32.999 libvfio-user 0.0.1
00:03:32.999
00:03:32.999 User defined options
00:03:32.999 buildtype : debug
00:03:32.999 default_library: shared
00:03:32.999 libdir : /usr/local/lib
00:03:32.999
00:03:32.999 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:33.578 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:33.838 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:33.838 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:33.838 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:33.838 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:33.838 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:33.838 [6/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:33.838 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:33.838 [8/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:33.838 [9/37] Compiling C object samples/null.p/null.c.o
00:03:34.100 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:34.100 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:34.100 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:34.100 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:34.100 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:34.100 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:34.100 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:34.100 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:34.100 [18/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:34.100 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:34.100 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:34.100 [21/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:34.100 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:34.100 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:34.100 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:34.100 [25/37] Compiling C object samples/server.p/server.c.o
00:03:34.100 [26/37] Compiling C object samples/client.p/client.c.o
00:03:34.100 [27/37] Linking target samples/client
00:03:34.361 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:34.361 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:34.361 [30/37] Linking target test/unit_tests
00:03:34.361 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:03:34.623 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:34.623 [33/37] Linking target samples/gpio-pci-idio-16
00:03:34.623 [34/37] Linking target samples/null
00:03:34.623 [35/37] Linking target samples/server
00:03:34.623 [36/37] Linking target samples/lspci
00:03:34.623 [37/37] Linking target samples/shadow_ioeventfd_server
00:03:34.623 INFO: autodetecting backend as ninja
00:03:34.623 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:34.887 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:35.465 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:35.465 ninja: no work to do.
00:04:14.171 CC lib/log/log.o
00:04:14.171 CC lib/ut_mock/mock.o
00:04:14.171 CC lib/log/log_flags.o
00:04:14.171 CC lib/log/log_deprecated.o
00:04:14.171 CC lib/ut/ut.o
00:04:14.171 LIB libspdk_ut.a
00:04:14.171 LIB libspdk_log.a
00:04:14.171 LIB libspdk_ut_mock.a
00:04:14.171 SO libspdk_ut.so.2.0
00:04:14.171 SO libspdk_log.so.7.1
00:04:14.171 SO libspdk_ut_mock.so.6.0
00:04:14.171 SYMLINK libspdk_ut.so
00:04:14.171 SYMLINK libspdk_ut_mock.so
00:04:14.171 SYMLINK libspdk_log.so
00:04:14.171 CC lib/dma/dma.o
00:04:14.171 CXX lib/trace_parser/trace.o
00:04:14.171 CC lib/util/base64.o
00:04:14.171 CC lib/ioat/ioat.o
00:04:14.171 CC lib/util/bit_array.o
00:04:14.171 CC lib/util/cpuset.o
00:04:14.171 CC lib/util/crc16.o
00:04:14.171 CC lib/util/crc32.o
00:04:14.171 CC lib/util/crc32c.o
00:04:14.171 CC lib/util/crc32_ieee.o
00:04:14.171 CC lib/util/crc64.o
00:04:14.171 CC lib/util/dif.o
00:04:14.171 CC lib/util/fd.o
00:04:14.171 CC lib/util/fd_group.o
00:04:14.171 CC lib/util/file.o
00:04:14.171 CC lib/util/hexlify.o
00:04:14.171 CC lib/util/iov.o
00:04:14.171 CC lib/util/math.o
00:04:14.171 CC lib/util/net.o
00:04:14.171 CC lib/util/pipe.o
00:04:14.171 CC lib/util/strerror_tls.o
00:04:14.171 CC lib/util/string.o
00:04:14.171 CC lib/util/uuid.o
00:04:14.171 CC lib/util/xor.o
00:04:14.171 CC lib/util/md5.o
00:04:14.171 CC lib/util/zipf.o
00:04:14.171 CC lib/vfio_user/host/vfio_user_pci.o
00:04:14.171 CC lib/vfio_user/host/vfio_user.o
00:04:14.171 LIB libspdk_dma.a
00:04:14.171 SO libspdk_dma.so.5.0
00:04:14.171 LIB libspdk_ioat.a
00:04:14.171 SYMLINK libspdk_dma.so
00:04:14.171 SO libspdk_ioat.so.7.0
00:04:14.171 SYMLINK libspdk_ioat.so
00:04:14.171 LIB libspdk_vfio_user.a
00:04:14.171 SO libspdk_vfio_user.so.5.0
00:04:14.171 SYMLINK libspdk_vfio_user.so
00:04:14.171 LIB libspdk_util.a
00:04:14.171 SO libspdk_util.so.10.1
00:04:14.171 SYMLINK libspdk_util.so
00:04:14.171 CC lib/idxd/idxd.o
00:04:14.171 CC lib/env_dpdk/env.o
00:04:14.171 CC lib/vmd/vmd.o
00:04:14.171 CC lib/json/json_parse.o
00:04:14.171 CC lib/idxd/idxd_user.o
00:04:14.171 CC lib/conf/conf.o
00:04:14.171 CC lib/env_dpdk/memory.o
00:04:14.171 CC lib/rdma_utils/rdma_utils.o
00:04:14.171 CC lib/vmd/led.o
00:04:14.171 CC lib/json/json_util.o
00:04:14.171 CC lib/idxd/idxd_kernel.o
00:04:14.171 CC lib/json/json_write.o
00:04:14.171 CC lib/env_dpdk/pci.o
00:04:14.171 CC lib/env_dpdk/init.o
00:04:14.171 CC lib/env_dpdk/threads.o
00:04:14.171 CC lib/env_dpdk/pci_ioat.o
00:04:14.171 CC lib/env_dpdk/pci_virtio.o
00:04:14.171 CC lib/env_dpdk/pci_vmd.o
00:04:14.171 CC lib/env_dpdk/pci_idxd.o
00:04:14.171 CC lib/env_dpdk/pci_event.o
00:04:14.171 CC lib/env_dpdk/sigbus_handler.o
00:04:14.171 CC lib/env_dpdk/pci_dpdk.o
00:04:14.171 CC lib/env_dpdk/pci_dpdk_2207.o
00:04:14.171 CC lib/env_dpdk/pci_dpdk_2211.o
00:04:14.171 LIB libspdk_trace_parser.a
00:04:14.171 SO libspdk_trace_parser.so.6.0
00:04:14.171 SYMLINK libspdk_trace_parser.so
00:04:14.171 LIB libspdk_conf.a
00:04:14.171 SO libspdk_conf.so.6.0
00:04:14.171 LIB libspdk_rdma_utils.a
00:04:14.171 SYMLINK libspdk_conf.so
00:04:14.171 LIB libspdk_json.a
00:04:14.171 SO libspdk_rdma_utils.so.1.0
00:04:14.171 SO libspdk_json.so.6.0
00:04:14.171 SYMLINK libspdk_rdma_utils.so
00:04:14.171 SYMLINK libspdk_json.so
00:04:14.171 CC lib/rdma_provider/common.o
00:04:14.171 CC lib/rdma_provider/rdma_provider_verbs.o
00:04:14.171 CC lib/jsonrpc/jsonrpc_server.o
00:04:14.171 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:04:14.171 CC lib/jsonrpc/jsonrpc_client.o
00:04:14.171 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:04:14.171 LIB libspdk_idxd.a
00:04:14.171 SO libspdk_idxd.so.12.1
00:04:14.171 LIB libspdk_vmd.a
00:04:14.171 SYMLINK libspdk_idxd.so
00:04:14.171 SO libspdk_vmd.so.6.0
00:04:14.171 SYMLINK libspdk_vmd.so
00:04:14.171 LIB libspdk_rdma_provider.a
00:04:14.171 SO libspdk_rdma_provider.so.7.0
00:04:14.171 LIB libspdk_jsonrpc.a
00:04:14.171 SYMLINK libspdk_rdma_provider.so
00:04:14.171 SO libspdk_jsonrpc.so.6.0
00:04:14.171 SYMLINK libspdk_jsonrpc.so
00:04:14.171 CC lib/rpc/rpc.o
00:04:14.171 LIB libspdk_rpc.a
00:04:14.171 SO libspdk_rpc.so.6.0
00:04:14.171 SYMLINK libspdk_rpc.so
00:04:14.171 CC lib/notify/notify.o
00:04:14.171 CC lib/trace/trace.o
00:04:14.171 CC lib/notify/notify_rpc.o
00:04:14.171 CC lib/keyring/keyring.o
00:04:14.171 CC lib/trace/trace_flags.o
00:04:14.171 CC lib/keyring/keyring_rpc.o
00:04:14.171 CC lib/trace/trace_rpc.o
00:04:14.171 LIB libspdk_notify.a
00:04:14.171 SO libspdk_notify.so.6.0
00:04:14.171 SYMLINK libspdk_notify.so
00:04:14.171 LIB libspdk_keyring.a
00:04:14.171 SO libspdk_keyring.so.2.0
00:04:14.171 LIB libspdk_trace.a
00:04:14.171 SO libspdk_trace.so.11.0
00:04:14.171 SYMLINK libspdk_keyring.so
00:04:14.171 SYMLINK libspdk_trace.so
00:04:14.171 LIB libspdk_env_dpdk.a
00:04:14.430 CC lib/sock/sock.o
00:04:14.430 CC lib/sock/sock_rpc.o
00:04:14.430 SO libspdk_env_dpdk.so.15.1
00:04:14.430 CC lib/thread/thread.o
00:04:14.430 CC lib/thread/iobuf.o
00:04:14.430 SYMLINK libspdk_env_dpdk.so
00:04:14.688 LIB libspdk_sock.a
00:04:14.688 SO libspdk_sock.so.10.0
00:04:14.688 SYMLINK libspdk_sock.so
00:04:14.946 CC lib/nvme/nvme_ctrlr_cmd.o
00:04:14.946 CC lib/nvme/nvme_ctrlr.o
00:04:14.946 CC lib/nvme/nvme_fabric.o
00:04:14.946 CC lib/nvme/nvme_ns_cmd.o
00:04:14.946 CC lib/nvme/nvme_ns.o
00:04:14.946 CC lib/nvme/nvme_pcie_common.o
00:04:14.946 CC lib/nvme/nvme_pcie.o
00:04:14.946 CC lib/nvme/nvme_qpair.o
00:04:14.946 CC lib/nvme/nvme.o
00:04:14.946 CC lib/nvme/nvme_quirks.o
00:04:14.946 CC lib/nvme/nvme_transport.o
00:04:14.946 CC lib/nvme/nvme_discovery.o
00:04:14.946 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:04:14.946 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:04:14.946 CC lib/nvme/nvme_tcp.o
00:04:14.946 CC lib/nvme/nvme_opal.o
00:04:14.946 CC lib/nvme/nvme_io_msg.o
00:04:14.946 CC lib/nvme/nvme_poll_group.o
00:04:14.946 CC lib/nvme/nvme_zns.o
00:04:14.946 CC lib/nvme/nvme_stubs.o
00:04:14.946 CC lib/nvme/nvme_auth.o
00:04:14.946 CC lib/nvme/nvme_cuse.o
00:04:14.946 CC lib/nvme/nvme_vfio_user.o
00:04:14.946 CC lib/nvme/nvme_rdma.o
00:04:16.325 LIB libspdk_thread.a
00:04:16.325 SO libspdk_thread.so.11.0
00:04:16.325 SYMLINK libspdk_thread.so
00:04:16.584 CC lib/virtio/virtio.o
00:04:16.584 CC lib/init/json_config.o
00:04:16.584 CC lib/blob/blobstore.o
00:04:16.584 CC lib/accel/accel_rpc.o
00:04:16.584 CC lib/fsdev/fsdev.o
00:04:16.584 CC lib/blob/request.o
00:04:16.584 CC lib/accel/accel.o
00:04:16.584 CC lib/init/subsystem.o
00:04:16.584 CC lib/accel/accel_sw.o
00:04:16.584 CC lib/virtio/virtio_vhost_user.o
00:04:16.584 CC lib/vfu_tgt/tgt_rpc.o
00:04:16.584 CC lib/blob/zeroes.o
00:04:16.584 CC lib/vfu_tgt/tgt_endpoint.o
00:04:16.584 CC lib/fsdev/fsdev_io.o
00:04:16.584 CC lib/init/subsystem_rpc.o
00:04:16.584 CC lib/virtio/virtio_vfio_user.o
00:04:16.584 CC lib/blob/blob_bs_dev.o
00:04:16.584 CC lib/fsdev/fsdev_rpc.o
00:04:16.584 CC lib/init/rpc.o
00:04:16.584 CC lib/virtio/virtio_pci.o
00:04:16.843 LIB libspdk_init.a
00:04:16.843 SO libspdk_init.so.6.0
00:04:16.843 LIB libspdk_virtio.a
00:04:16.843 SYMLINK libspdk_init.so
00:04:16.843 LIB libspdk_vfu_tgt.a
00:04:16.843 SO libspdk_virtio.so.7.0
00:04:16.843 SO libspdk_vfu_tgt.so.3.0
00:04:17.101 SYMLINK libspdk_vfu_tgt.so
00:04:17.101 SYMLINK libspdk_virtio.so
00:04:17.101 CC lib/event/app.o
00:04:17.101 CC lib/event/reactor.o
00:04:17.101 CC lib/event/log_rpc.o
00:04:17.101 CC lib/event/app_rpc.o
00:04:17.101 CC lib/event/scheduler_static.o
00:04:17.359 LIB libspdk_fsdev.a
00:04:17.359 SO libspdk_fsdev.so.2.0
00:04:17.359 SYMLINK libspdk_fsdev.so
00:04:17.359 LIB libspdk_nvme.a
00:04:17.617 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:04:17.617 LIB libspdk_event.a
00:04:17.617 SO libspdk_event.so.14.0
00:04:17.617 SO libspdk_nvme.so.15.0
00:04:17.617 SYMLINK libspdk_event.so
00:04:17.617 LIB libspdk_accel.a
00:04:17.874 SO libspdk_accel.so.16.0
00:04:17.874 SYMLINK libspdk_accel.so
00:04:17.874 SYMLINK libspdk_nvme.so
00:04:18.131 CC lib/bdev/bdev.o
00:04:18.131 CC lib/bdev/bdev_rpc.o
00:04:18.131 CC lib/bdev/bdev_zone.o
00:04:18.131 CC lib/bdev/part.o
00:04:18.131 CC lib/bdev/scsi_nvme.o
00:04:18.131 LIB libspdk_fuse_dispatcher.a
00:04:18.131 SO libspdk_fuse_dispatcher.so.1.0
00:04:18.131 SYMLINK libspdk_fuse_dispatcher.so
00:04:20.097 LIB libspdk_blob.a
00:04:20.097 SO libspdk_blob.so.11.0
00:04:20.097 SYMLINK libspdk_blob.so
00:04:20.097 CC lib/lvol/lvol.o
00:04:20.097 CC lib/blobfs/blobfs.o
00:04:20.097 CC lib/blobfs/tree.o
00:04:20.686 LIB libspdk_bdev.a
00:04:20.686 SO libspdk_bdev.so.17.0
00:04:20.686 LIB libspdk_blobfs.a
00:04:20.951 SO libspdk_blobfs.so.10.0
00:04:20.951 SYMLINK libspdk_bdev.so
00:04:20.951 SYMLINK libspdk_blobfs.so
00:04:20.951 LIB libspdk_lvol.a
00:04:20.951 SO libspdk_lvol.so.10.0
00:04:20.951 CC lib/scsi/dev.o
00:04:20.951 CC lib/ftl/ftl_core.o
00:04:20.951 CC lib/nvmf/ctrlr.o
00:04:20.951 CC lib/nbd/nbd.o
00:04:20.951 CC lib/ftl/ftl_init.o
00:04:20.951 CC lib/ublk/ublk.o
00:04:20.951 CC lib/scsi/lun.o
00:04:20.951 CC lib/nvmf/ctrlr_discovery.o
00:04:20.951 CC lib/ftl/ftl_layout.o
00:04:20.951 CC lib/nbd/nbd_rpc.o
00:04:20.951 CC lib/ublk/ublk_rpc.o
00:04:20.951 CC lib/scsi/port.o
00:04:20.951 CC lib/nvmf/ctrlr_bdev.o
00:04:20.951 CC lib/ftl/ftl_debug.o
00:04:20.951 CC lib/scsi/scsi.o
00:04:20.951 CC lib/ftl/ftl_io.o
00:04:20.951 CC lib/nvmf/subsystem.o
00:04:20.951 CC lib/scsi/scsi_bdev.o
00:04:20.951 CC lib/nvmf/nvmf.o
00:04:20.951 CC lib/ftl/ftl_sb.o
00:04:20.951 CC lib/scsi/scsi_pr.o
00:04:20.951 CC lib/nvmf/nvmf_rpc.o
00:04:20.951 CC lib/scsi/scsi_rpc.o
00:04:20.951 CC lib/ftl/ftl_l2p.o
00:04:20.951 CC lib/nvmf/tcp.o
00:04:20.951 CC lib/nvmf/transport.o
00:04:20.951 CC lib/ftl/ftl_l2p_flat.o
00:04:20.951 CC lib/scsi/task.o
00:04:20.951 CC lib/ftl/ftl_nv_cache.o
00:04:20.951 CC lib/ftl/ftl_band_ops.o
00:04:20.951 CC lib/ftl/ftl_band.o
00:04:20.951 CC lib/nvmf/stubs.o
00:04:20.951 CC lib/nvmf/mdns_server.o
00:04:20.951 CC lib/ftl/ftl_writer.o
00:04:20.951 CC lib/nvmf/vfio_user.o
00:04:20.951 CC lib/ftl/ftl_rq.o
00:04:20.951 CC lib/nvmf/rdma.o
00:04:20.951 CC lib/ftl/ftl_l2p_cache.o
00:04:20.951 CC lib/nvmf/auth.o
00:04:20.951 CC lib/ftl/ftl_reloc.o
00:04:20.951 CC lib/ftl/ftl_p2l.o
00:04:20.951 CC lib/ftl/ftl_p2l_log.o
00:04:20.951 CC lib/ftl/mngt/ftl_mngt.o
00:04:20.951 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:04:20.951 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:04:20.951 CC lib/ftl/mngt/ftl_mngt_startup.o
00:04:20.951 SYMLINK libspdk_lvol.so
00:04:20.951 CC lib/ftl/mngt/ftl_mngt_md.o
00:04:20.951 CC lib/ftl/mngt/ftl_mngt_misc.o
00:04:21.529 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:04:21.529 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:04:21.529 CC lib/ftl/mngt/ftl_mngt_band.o
00:04:21.529 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:04:21.529 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:04:21.529 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:04:21.529 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:04:21.529 CC lib/ftl/utils/ftl_conf.o
00:04:21.529 CC lib/ftl/utils/ftl_md.o
00:04:21.529 CC lib/ftl/utils/ftl_mempool.o
00:04:21.529 CC lib/ftl/utils/ftl_bitmap.o
00:04:21.529 CC lib/ftl/utils/ftl_property.o
00:04:21.529 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:04:21.529 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:04:21.529 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:04:21.529 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:04:21.529 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:04:21.529 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:04:21.529 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:04:21.529 CC lib/ftl/upgrade/ftl_sb_v3.o
00:04:21.788 CC lib/ftl/upgrade/ftl_sb_v5.o
00:04:21.788 CC lib/ftl/nvc/ftl_nvc_dev.o
00:04:21.788 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:04:21.788 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:04:21.788 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:04:21.788 CC lib/ftl/base/ftl_base_dev.o
00:04:21.788 CC lib/ftl/base/ftl_base_bdev.o
00:04:21.788 CC lib/ftl/ftl_trace.o
00:04:21.788 LIB libspdk_nbd.a
00:04:21.788 SO libspdk_nbd.so.7.0
00:04:22.047 SYMLINK libspdk_nbd.so
00:04:22.047 LIB libspdk_scsi.a
00:04:22.047 SO libspdk_scsi.so.9.0
00:04:22.047 LIB libspdk_ublk.a
00:04:22.047 SO libspdk_ublk.so.3.0
00:04:22.047 SYMLINK libspdk_scsi.so
00:04:22.307 SYMLINK libspdk_ublk.so
00:04:22.307 CC lib/vhost/vhost.o
00:04:22.307 CC lib/iscsi/conn.o
00:04:22.307 CC lib/vhost/vhost_rpc.o
00:04:22.307 CC lib/iscsi/init_grp.o
00:04:22.307 CC lib/vhost/vhost_scsi.o
00:04:22.307 CC lib/iscsi/iscsi.o
00:04:22.307 CC lib/vhost/vhost_blk.o
00:04:22.307 CC lib/iscsi/param.o
00:04:22.307 CC lib/iscsi/portal_grp.o
00:04:22.307 CC lib/vhost/rte_vhost_user.o
00:04:22.307 CC lib/iscsi/tgt_node.o
00:04:22.307 CC lib/iscsi/iscsi_subsystem.o
00:04:22.307 CC lib/iscsi/iscsi_rpc.o
00:04:22.307 CC lib/iscsi/task.o
00:04:22.566 LIB libspdk_ftl.a
00:04:22.823 SO libspdk_ftl.so.9.0
00:04:23.081 SYMLINK libspdk_ftl.so
00:04:23.648 LIB libspdk_vhost.a
00:04:23.648 SO libspdk_vhost.so.8.0
00:04:23.648 SYMLINK libspdk_vhost.so
00:04:23.648 LIB libspdk_nvmf.a
00:04:23.648 SO libspdk_nvmf.so.20.0
00:04:23.906 LIB libspdk_iscsi.a
00:04:23.906 SO libspdk_iscsi.so.8.0
00:04:23.906 SYMLINK libspdk_nvmf.so
00:04:23.906 SYMLINK libspdk_iscsi.so
00:04:24.164 CC module/vfu_device/vfu_virtio.o
00:04:24.164 CC module/vfu_device/vfu_virtio_blk.o
00:04:24.164 CC module/env_dpdk/env_dpdk_rpc.o
00:04:24.164 CC module/vfu_device/vfu_virtio_scsi.o
00:04:24.164 CC module/vfu_device/vfu_virtio_rpc.o
00:04:24.164 CC module/vfu_device/vfu_virtio_fs.o
00:04:24.422 CC module/accel/ioat/accel_ioat.o
00:04:24.422 CC module/keyring/file/keyring.o
00:04:24.422 CC module/accel/dsa/accel_dsa.o
00:04:24.422 CC module/scheduler/gscheduler/gscheduler.o
00:04:24.422 CC module/keyring/linux/keyring.o
00:04:24.422 CC module/accel/iaa/accel_iaa.o
00:04:24.422 CC module/accel/ioat/accel_ioat_rpc.o
00:04:24.422 CC module/accel/dsa/accel_dsa_rpc.o
00:04:24.422 CC module/accel/error/accel_error.o
00:04:24.422 CC module/keyring/linux/keyring_rpc.o
00:04:24.422 CC module/accel/iaa/accel_iaa_rpc.o
00:04:24.422 CC module/blob/bdev/blob_bdev.o
00:04:24.422 CC module/keyring/file/keyring_rpc.o
00:04:24.422 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:04:24.422 CC module/sock/posix/posix.o
00:04:24.422 CC module/accel/error/accel_error_rpc.o
00:04:24.422 CC module/scheduler/dynamic/scheduler_dynamic.o
00:04:24.422 CC module/fsdev/aio/fsdev_aio.o
00:04:24.422 CC module/fsdev/aio/fsdev_aio_rpc.o
00:04:24.422 CC module/fsdev/aio/linux_aio_mgr.o
00:04:24.422 LIB libspdk_env_dpdk_rpc.a
00:04:24.422 SO libspdk_env_dpdk_rpc.so.6.0
00:04:24.422 SYMLINK libspdk_env_dpdk_rpc.so
00:04:24.422 LIB libspdk_scheduler_gscheduler.a
00:04:24.681 SO libspdk_scheduler_gscheduler.so.4.0
00:04:24.681 LIB libspdk_scheduler_dpdk_governor.a
00:04:24.681 LIB libspdk_accel_ioat.a
00:04:24.681 SO libspdk_scheduler_dpdk_governor.so.4.0
00:04:24.681 LIB libspdk_accel_iaa.a
00:04:24.681 LIB libspdk_accel_error.a
00:04:24.681 LIB libspdk_keyring_file.a
00:04:24.681 SO libspdk_accel_ioat.so.6.0
00:04:24.681 LIB libspdk_keyring_linux.a
00:04:24.681 SYMLINK libspdk_scheduler_gscheduler.so
00:04:24.681 LIB libspdk_scheduler_dynamic.a
00:04:24.681 SO libspdk_accel_iaa.so.3.0
00:04:24.681 SO libspdk_accel_error.so.2.0
00:04:24.681 SO libspdk_keyring_file.so.2.0
00:04:24.681 SO libspdk_keyring_linux.so.1.0
00:04:24.681 SO libspdk_scheduler_dynamic.so.4.0
00:04:24.681 SYMLINK libspdk_scheduler_dpdk_governor.so
00:04:24.681 SYMLINK libspdk_accel_ioat.so
00:04:24.681 LIB libspdk_blob_bdev.a
00:04:24.681 SYMLINK libspdk_accel_error.so
00:04:24.681 SYMLINK libspdk_accel_iaa.so
00:04:24.681 SYMLINK libspdk_keyring_file.so
00:04:24.681 SYMLINK libspdk_keyring_linux.so
00:04:24.681 SYMLINK libspdk_scheduler_dynamic.so
00:04:24.681 SO libspdk_blob_bdev.so.11.0
00:04:24.681 LIB libspdk_accel_dsa.a
00:04:24.681 SO libspdk_accel_dsa.so.5.0
00:04:24.681 SYMLINK libspdk_blob_bdev.so
00:04:24.681 SYMLINK libspdk_accel_dsa.so
00:04:24.940 LIB libspdk_vfu_device.a
00:04:24.940 SO libspdk_vfu_device.so.3.0
00:04:24.940 CC module/blobfs/bdev/blobfs_bdev.o
00:04:24.940 CC module/bdev/delay/vbdev_delay_rpc.o
00:04:24.940 CC module/bdev/delay/vbdev_delay.o
00:04:24.940 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:04:24.940 CC module/bdev/gpt/gpt.o
00:04:24.940 CC module/bdev/malloc/bdev_malloc.o
00:04:24.941 CC module/bdev/raid/bdev_raid.o
00:04:24.941 CC module/bdev/error/vbdev_error.o
00:04:24.941 CC module/bdev/lvol/vbdev_lvol.o
00:04:24.941 CC module/bdev/nvme/bdev_nvme.o
00:04:24.941 CC module/bdev/split/vbdev_split.o
00:04:24.941 CC module/bdev/raid/bdev_raid_rpc.o
00:04:24.941 CC module/bdev/split/vbdev_split_rpc.o
00:04:24.941 CC module/bdev/malloc/bdev_malloc_rpc.o
00:04:24.941 CC module/bdev/error/vbdev_error_rpc.o
00:04:24.941 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:04:24.941 CC module/bdev/gpt/vbdev_gpt.o
00:04:24.941 CC module/bdev/nvme/bdev_nvme_rpc.o
00:04:24.941 CC module/bdev/raid/bdev_raid_sb.o
00:04:24.941 CC module/bdev/nvme/nvme_rpc.o
00:04:24.941 CC module/bdev/nvme/bdev_mdns_client.o
00:04:24.941 CC module/bdev/raid/raid0.o
00:04:24.941 CC module/bdev/zone_block/vbdev_zone_block.o
00:04:24.941 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:04:24.941 CC module/bdev/raid/raid1.o
00:04:24.941 CC module/bdev/nvme/vbdev_opal.o
00:04:24.941 CC module/bdev/raid/concat.o
00:04:24.941 CC module/bdev/aio/bdev_aio.o
00:04:24.941 CC module/bdev/nvme/vbdev_opal_rpc.o
00:04:24.941 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:04:24.941 CC module/bdev/aio/bdev_aio_rpc.o
00:04:24.941 CC module/bdev/virtio/bdev_virtio_scsi.o
00:04:24.941 CC module/bdev/virtio/bdev_virtio_blk.o
00:04:24.941 CC module/bdev/null/bdev_null.o
00:04:24.941 CC module/bdev/virtio/bdev_virtio_rpc.o
00:04:24.941 CC module/bdev/null/bdev_null_rpc.o
00:04:24.941 CC module/bdev/ftl/bdev_ftl.o
00:04:24.941 CC module/bdev/ftl/bdev_ftl_rpc.o
00:04:24.941 CC module/bdev/iscsi/bdev_iscsi.o
00:04:24.941 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:04:24.941 CC module/bdev/passthru/vbdev_passthru.o
00:04:24.941 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:04:25.203 SYMLINK libspdk_vfu_device.so
00:04:25.203 LIB libspdk_fsdev_aio.a
00:04:25.203 LIB libspdk_sock_posix.a
00:04:25.463 SO libspdk_fsdev_aio.so.1.0
00:04:25.463 SO libspdk_sock_posix.so.6.0
00:04:25.463 LIB libspdk_blobfs_bdev.a
00:04:25.463 SYMLINK libspdk_fsdev_aio.so
00:04:25.463 SO libspdk_blobfs_bdev.so.6.0
00:04:25.463 LIB libspdk_bdev_split.a
00:04:25.463 SYMLINK libspdk_sock_posix.so
00:04:25.463 LIB libspdk_bdev_ftl.a
00:04:25.463 SO libspdk_bdev_split.so.6.0
00:04:25.463 SO libspdk_bdev_ftl.so.6.0
00:04:25.463 LIB libspdk_bdev_null.a
00:04:25.463 LIB libspdk_bdev_gpt.a
00:04:25.463 SYMLINK libspdk_blobfs_bdev.so
00:04:25.463 LIB libspdk_bdev_error.a
00:04:25.463 LIB libspdk_bdev_passthru.a
00:04:25.463 SYMLINK libspdk_bdev_split.so
00:04:25.463 SO libspdk_bdev_null.so.6.0
00:04:25.463 SO libspdk_bdev_gpt.so.6.0
00:04:25.463 SO libspdk_bdev_error.so.6.0
00:04:25.463 SO libspdk_bdev_passthru.so.6.0
00:04:25.463 SYMLINK libspdk_bdev_ftl.so
00:04:25.463 LIB libspdk_bdev_aio.a
00:04:25.722 LIB libspdk_bdev_malloc.a
00:04:25.722 LIB libspdk_bdev_iscsi.a
00:04:25.722 SYMLINK libspdk_bdev_null.so
00:04:25.722 SYMLINK libspdk_bdev_gpt.so
00:04:25.722 LIB libspdk_bdev_zone_block.a
00:04:25.722 SYMLINK libspdk_bdev_error.so
00:04:25.723 SO libspdk_bdev_aio.so.6.0
00:04:25.723 SYMLINK libspdk_bdev_passthru.so
00:04:25.723 SO libspdk_bdev_malloc.so.6.0
00:04:25.723 SO libspdk_bdev_iscsi.so.6.0
00:04:25.723 LIB libspdk_bdev_delay.a
00:04:25.723 SO libspdk_bdev_zone_block.so.6.0
00:04:25.723 SO libspdk_bdev_delay.so.6.0
00:04:25.723 SYMLINK libspdk_bdev_aio.so
00:04:25.723 SYMLINK libspdk_bdev_malloc.so
00:04:25.723 SYMLINK libspdk_bdev_iscsi.so
00:04:25.723 SYMLINK libspdk_bdev_zone_block.so
00:04:25.723 SYMLINK libspdk_bdev_delay.so
00:04:25.723 LIB libspdk_bdev_lvol.a
00:04:25.723 SO libspdk_bdev_lvol.so.6.0
00:04:25.723 LIB libspdk_bdev_virtio.a
00:04:25.723 SYMLINK libspdk_bdev_lvol.so
00:04:25.983 SO libspdk_bdev_virtio.so.6.0
00:04:25.983 SYMLINK libspdk_bdev_virtio.so
00:04:26.243 LIB libspdk_bdev_raid.a
00:04:26.243 SO libspdk_bdev_raid.so.6.0
00:04:26.502 SYMLINK libspdk_bdev_raid.so
00:04:27.880 LIB libspdk_bdev_nvme.a
00:04:27.880 SO libspdk_bdev_nvme.so.7.1
00:04:27.880 SYMLINK libspdk_bdev_nvme.so
00:04:28.139 CC module/event/subsystems/keyring/keyring.o
00:04:28.139 CC module/event/subsystems/sock/sock.o
00:04:28.139 CC module/event/subsystems/iobuf/iobuf.o
00:04:28.139 CC module/event/subsystems/fsdev/fsdev.o
00:04:28.139 CC module/event/subsystems/vmd/vmd.o
00:04:28.139 CC module/event/subsystems/scheduler/scheduler.o
00:04:28.139 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:04:28.139 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:04:28.139 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:04:28.139 CC module/event/subsystems/vmd/vmd_rpc.o
00:04:28.398 LIB libspdk_event_keyring.a
00:04:28.398 LIB libspdk_event_vhost_blk.a
00:04:28.398 LIB libspdk_event_fsdev.a
00:04:28.398 LIB libspdk_event_scheduler.a
00:04:28.398 LIB libspdk_event_vfu_tgt.a
00:04:28.398 LIB libspdk_event_vmd.a
00:04:28.398 LIB libspdk_event_sock.a
00:04:28.398 SO libspdk_event_keyring.so.1.0
00:04:28.398 LIB libspdk_event_iobuf.a
00:04:28.398 SO libspdk_event_fsdev.so.1.0
00:04:28.398 SO libspdk_event_vhost_blk.so.3.0
00:04:28.398 SO libspdk_event_scheduler.so.4.0
00:04:28.398 SO libspdk_event_vfu_tgt.so.3.0
00:04:28.398 SO libspdk_event_sock.so.5.0
00:04:28.398 SO libspdk_event_vmd.so.6.0
00:04:28.398 SO libspdk_event_iobuf.so.3.0
00:04:28.398 SYMLINK libspdk_event_keyring.so
00:04:28.398 SYMLINK libspdk_event_fsdev.so
00:04:28.398 SYMLINK libspdk_event_vhost_blk.so
00:04:28.398 SYMLINK libspdk_event_scheduler.so
00:04:28.398 SYMLINK libspdk_event_vfu_tgt.so
00:04:28.398 SYMLINK libspdk_event_sock.so
00:04:28.398 SYMLINK libspdk_event_vmd.so
00:04:28.398 SYMLINK libspdk_event_iobuf.so
00:04:28.657 CC module/event/subsystems/accel/accel.o
00:04:28.657 LIB libspdk_event_accel.a
00:04:28.657 SO libspdk_event_accel.so.6.0
00:04:28.914 SYMLINK libspdk_event_accel.so
00:04:28.914 CC module/event/subsystems/bdev/bdev.o
00:04:29.173 LIB libspdk_event_bdev.a
00:04:29.173 SO libspdk_event_bdev.so.6.0
00:04:29.173 SYMLINK libspdk_event_bdev.so
00:04:29.431 CC module/event/subsystems/scsi/scsi.o
00:04:29.431 CC module/event/subsystems/nbd/nbd.o
00:04:29.431 CC module/event/subsystems/ublk/ublk.o
00:04:29.431 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:04:29.431 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:04:29.431 LIB libspdk_event_nbd.a
00:04:29.431 LIB libspdk_event_ublk.a
00:04:29.431 LIB libspdk_event_scsi.a
00:04:29.431 SO libspdk_event_ublk.so.3.0
00:04:29.431 SO libspdk_event_nbd.so.6.0
00:04:29.692 SO libspdk_event_scsi.so.6.0
00:04:29.692 SYMLINK libspdk_event_ublk.so
00:04:29.692 SYMLINK libspdk_event_nbd.so
00:04:29.692 SYMLINK libspdk_event_scsi.so
00:04:29.692 LIB libspdk_event_nvmf.a
00:04:29.692 SO libspdk_event_nvmf.so.6.0
00:04:29.692 SYMLINK libspdk_event_nvmf.so
00:04:29.692 CC module/event/subsystems/iscsi/iscsi.o
00:04:29.692 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:04:29.952 LIB libspdk_event_vhost_scsi.a
00:04:29.952 LIB libspdk_event_iscsi.a
00:04:29.952 SO libspdk_event_vhost_scsi.so.3.0
00:04:29.952 SO libspdk_event_iscsi.so.6.0
00:04:29.952 SYMLINK libspdk_event_vhost_scsi.so
00:04:29.952 SYMLINK libspdk_event_iscsi.so
00:04:30.211 SO libspdk.so.6.0
00:04:30.211 SYMLINK libspdk.so
00:04:30.211 CC app/trace_record/trace_record.o
00:04:30.211 CC app/spdk_lspci/spdk_lspci.o
00:04:30.211 CC app/spdk_nvme_identify/identify.o
00:04:30.211 CC app/spdk_top/spdk_top.o
00:04:30.211 CXX app/trace/trace.o
00:04:30.211 TEST_HEADER include/spdk/accel.h
00:04:30.211 TEST_HEADER include/spdk/accel_module.h
00:04:30.211 TEST_HEADER include/spdk/assert.h
00:04:30.211 TEST_HEADER include/spdk/barrier.h
00:04:30.211 CC app/spdk_nvme_perf/perf.o
00:04:30.211 TEST_HEADER include/spdk/base64.h
00:04:30.211 TEST_HEADER include/spdk/bdev.h
00:04:30.211 CC test/rpc_client/rpc_client_test.o
00:04:30.211 TEST_HEADER include/spdk/bdev_module.h
00:04:30.211 TEST_HEADER include/spdk/bdev_zone.h
00:04:30.211 TEST_HEADER include/spdk/bit_array.h
00:04:30.211 CC app/spdk_nvme_discover/discovery_aer.o
00:04:30.211 TEST_HEADER include/spdk/bit_pool.h
00:04:30.211 TEST_HEADER include/spdk/blob_bdev.h
00:04:30.211 TEST_HEADER include/spdk/blobfs_bdev.h
00:04:30.211 TEST_HEADER include/spdk/blobfs.h
00:04:30.211 TEST_HEADER include/spdk/blob.h
00:04:30.211 TEST_HEADER include/spdk/conf.h
00:04:30.211 TEST_HEADER include/spdk/config.h
00:04:30.211 TEST_HEADER include/spdk/cpuset.h
00:04:30.211 TEST_HEADER include/spdk/crc16.h
00:04:30.211 TEST_HEADER include/spdk/crc32.h
00:04:30.211 TEST_HEADER include/spdk/crc64.h
00:04:30.211 TEST_HEADER include/spdk/dif.h
00:04:30.211 TEST_HEADER include/spdk/dma.h
00:04:30.478 TEST_HEADER include/spdk/endian.h
00:04:30.478 TEST_HEADER include/spdk/env_dpdk.h
00:04:30.478 TEST_HEADER include/spdk/env.h
00:04:30.478 TEST_HEADER include/spdk/event.h
00:04:30.478 TEST_HEADER include/spdk/fd_group.h
00:04:30.478 TEST_HEADER include/spdk/fd.h
00:04:30.478 TEST_HEADER include/spdk/file.h
00:04:30.478 TEST_HEADER include/spdk/fsdev.h
00:04:30.478 TEST_HEADER include/spdk/fsdev_module.h
00:04:30.478 TEST_HEADER include/spdk/ftl.h
00:04:30.478 TEST_HEADER include/spdk/fuse_dispatcher.h
00:04:30.478 TEST_HEADER include/spdk/gpt_spec.h
00:04:30.478 TEST_HEADER include/spdk/hexlify.h
00:04:30.478 TEST_HEADER include/spdk/histogram_data.h
00:04:30.478 TEST_HEADER include/spdk/idxd_spec.h
00:04:30.478 TEST_HEADER include/spdk/idxd.h
00:04:30.478 TEST_HEADER include/spdk/init.h
00:04:30.478 TEST_HEADER include/spdk/ioat_spec.h
00:04:30.478 TEST_HEADER include/spdk/ioat.h
00:04:30.478 TEST_HEADER include/spdk/iscsi_spec.h
00:04:30.478 TEST_HEADER include/spdk/json.h
00:04:30.478 TEST_HEADER include/spdk/jsonrpc.h
00:04:30.478 TEST_HEADER include/spdk/keyring.h
00:04:30.478 TEST_HEADER include/spdk/keyring_module.h
00:04:30.478 TEST_HEADER include/spdk/log.h
00:04:30.478 TEST_HEADER include/spdk/likely.h
00:04:30.478 TEST_HEADER include/spdk/md5.h
00:04:30.478 TEST_HEADER include/spdk/lvol.h
00:04:30.478 TEST_HEADER include/spdk/memory.h
00:04:30.478 TEST_HEADER include/spdk/mmio.h
00:04:30.478 TEST_HEADER include/spdk/nbd.h
00:04:30.478 TEST_HEADER include/spdk/net.h
00:04:30.478 TEST_HEADER include/spdk/notify.h
00:04:30.478 TEST_HEADER include/spdk/nvme.h
00:04:30.478 TEST_HEADER include/spdk/nvme_intel.h
00:04:30.478 TEST_HEADER include/spdk/nvme_ocssd.h
00:04:30.478 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:04:30.478 TEST_HEADER include/spdk/nvme_spec.h
00:04:30.478 TEST_HEADER include/spdk/nvme_zns.h
00:04:30.478 TEST_HEADER include/spdk/nvmf_cmd.h
00:04:30.478 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:04:30.478 TEST_HEADER include/spdk/nvmf.h
00:04:30.478 TEST_HEADER include/spdk/nvmf_spec.h
00:04:30.478 TEST_HEADER include/spdk/nvmf_transport.h
00:04:30.478 TEST_HEADER include/spdk/opal_spec.h
00:04:30.478 TEST_HEADER include/spdk/opal.h
00:04:30.478 TEST_HEADER include/spdk/pci_ids.h
00:04:30.478 TEST_HEADER include/spdk/queue.h
00:04:30.478 TEST_HEADER include/spdk/pipe.h
00:04:30.478 TEST_HEADER include/spdk/reduce.h
00:04:30.478 TEST_HEADER include/spdk/rpc.h
00:04:30.478 TEST_HEADER include/spdk/scheduler.h
00:04:30.478 TEST_HEADER include/spdk/scsi.h
00:04:30.478 TEST_HEADER include/spdk/scsi_spec.h
00:04:30.478 TEST_HEADER include/spdk/stdinc.h
00:04:30.478 TEST_HEADER include/spdk/sock.h
00:04:30.478 TEST_HEADER include/spdk/string.h
00:04:30.478 TEST_HEADER include/spdk/thread.h
00:04:30.478 TEST_HEADER include/spdk/trace.h
00:04:30.478 TEST_HEADER include/spdk/trace_parser.h
00:04:30.478 TEST_HEADER include/spdk/tree.h
00:04:30.478 TEST_HEADER include/spdk/ublk.h
00:04:30.478 TEST_HEADER include/spdk/util.h
00:04:30.478 TEST_HEADER include/spdk/uuid.h
00:04:30.478 TEST_HEADER include/spdk/version.h
00:04:30.478 TEST_HEADER include/spdk/vfio_user_pci.h
00:04:30.478 TEST_HEADER include/spdk/vfio_user_spec.h
00:04:30.478 TEST_HEADER include/spdk/vhost.h
00:04:30.478 TEST_HEADER include/spdk/vmd.h
00:04:30.478 TEST_HEADER include/spdk/xor.h
00:04:30.478 TEST_HEADER include/spdk/zipf.h
00:04:30.478 CXX test/cpp_headers/accel.o
00:04:30.478 CXX test/cpp_headers/accel_module.o
00:04:30.478 CXX test/cpp_headers/assert.o
00:04:30.478 CC app/spdk_dd/spdk_dd.o
00:04:30.478 CXX test/cpp_headers/barrier.o
00:04:30.478 CXX test/cpp_headers/base64.o
00:04:30.478 CXX test/cpp_headers/bdev.o
00:04:30.478 CXX test/cpp_headers/bdev_module.o
00:04:30.478 CXX test/cpp_headers/bdev_zone.o
00:04:30.478 CXX test/cpp_headers/bit_array.o
00:04:30.478 CC app/nvmf_tgt/nvmf_main.o
00:04:30.478 CXX test/cpp_headers/bit_pool.o
00:04:30.478 CXX test/cpp_headers/blob_bdev.o
00:04:30.478 CXX test/cpp_headers/blobfs_bdev.o
00:04:30.478 CXX test/cpp_headers/blobfs.o
00:04:30.478 CXX test/cpp_headers/blob.o
00:04:30.478 CXX test/cpp_headers/conf.o
00:04:30.478 CXX test/cpp_headers/config.o
00:04:30.478 CXX test/cpp_headers/cpuset.o
00:04:30.478 CXX test/cpp_headers/crc16.o
00:04:30.478 CC examples/interrupt_tgt/interrupt_tgt.o
00:04:30.478 CC app/iscsi_tgt/iscsi_tgt.o
00:04:30.478 CXX test/cpp_headers/crc32.o
00:04:30.478 CC test/app/jsoncat/jsoncat.o
00:04:30.478 CC test/thread/poller_perf/poller_perf.o
00:04:30.478 CC app/spdk_tgt/spdk_tgt.o
00:04:30.478 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:04:30.478 CC test/app/histogram_perf/histogram_perf.o
00:04:30.478 CC test/app/stub/stub.o
00:04:30.478 CC test/env/vtophys/vtophys.o
00:04:30.478 CC test/env/memory/memory_ut.o
00:04:30.478 CC examples/ioat/perf/perf.o
00:04:30.478 CC test/env/pci/pci_ut.o
00:04:30.478 CC examples/ioat/verify/verify.o
00:04:30.478 CC examples/util/zipf/zipf.o
00:04:30.478 CC app/fio/nvme/fio_plugin.o
00:04:30.478 CC test/dma/test_dma/test_dma.o
00:04:30.478 CC app/fio/bdev/fio_plugin.o
00:04:30.478 CC test/app/bdev_svc/bdev_svc.o
00:04:30.739 LINK spdk_lspci
00:04:30.739 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:04:30.739 CC test/env/mem_callbacks/mem_callbacks.o
00:04:30.739 LINK rpc_client_test
00:04:30.739 LINK spdk_nvme_discover
00:04:30.739 LINK jsoncat
00:04:30.739 LINK poller_perf
00:04:30.739 LINK nvmf_tgt
00:04:30.739 LINK vtophys
00:04:30.739 LINK histogram_perf
00:04:30.739 LINK interrupt_tgt
00:04:30.739 CXX test/cpp_headers/crc64.o
00:04:30.739 CXX test/cpp_headers/dif.o
00:04:31.002 CXX test/cpp_headers/dma.o
00:04:31.002 CXX test/cpp_headers/endian.o
00:04:31.002 LINK spdk_trace_record
00:04:31.002 CXX test/cpp_headers/env_dpdk.o
00:04:31.002 LINK zipf
00:04:31.002 CXX test/cpp_headers/env.o
00:04:31.002 LINK env_dpdk_post_init
00:04:31.002 CXX test/cpp_headers/event.o
00:04:31.002 CXX test/cpp_headers/fd.o
00:04:31.002 CXX test/cpp_headers/fd_group.o
00:04:31.002 CXX test/cpp_headers/file.o
00:04:31.002 CXX test/cpp_headers/fsdev.o
00:04:31.002 CXX test/cpp_headers/fsdev_module.o
00:04:31.002 LINK iscsi_tgt
00:04:31.002 LINK stub
00:04:31.002 CXX test/cpp_headers/ftl.o
00:04:31.002 CXX test/cpp_headers/fuse_dispatcher.o
00:04:31.002 CXX test/cpp_headers/gpt_spec.o
00:04:31.002 CXX test/cpp_headers/hexlify.o
00:04:31.002 LINK verify
00:04:31.002 LINK bdev_svc
00:04:31.002 LINK ioat_perf
00:04:31.002 LINK spdk_tgt
00:04:31.002 CXX test/cpp_headers/histogram_data.o
00:04:31.002 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:04:31.002 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:04:31.002 CXX test/cpp_headers/idxd.o
00:04:31.002 CXX test/cpp_headers/idxd_spec.o
00:04:31.265 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:04:31.265 CXX test/cpp_headers/init.o
00:04:31.265 CXX test/cpp_headers/ioat.o
00:04:31.265 LINK spdk_dd
00:04:31.265 CXX test/cpp_headers/ioat_spec.o
00:04:31.265 CXX test/cpp_headers/iscsi_spec.o
00:04:31.265 CXX test/cpp_headers/json.o
00:04:31.265 LINK spdk_trace
00:04:31.265 CXX test/cpp_headers/jsonrpc.o
00:04:31.265 CXX test/cpp_headers/keyring.o
00:04:31.265 CXX test/cpp_headers/keyring_module.o
00:04:31.265 CXX test/cpp_headers/likely.o
00:04:31.265 CXX test/cpp_headers/log.o
00:04:31.265 CXX test/cpp_headers/lvol.o
00:04:31.265 CXX test/cpp_headers/md5.o
00:04:31.265 CXX test/cpp_headers/memory.o
00:04:31.265 LINK pci_ut
00:04:31.265 CXX test/cpp_headers/mmio.o
00:04:31.265 CXX test/cpp_headers/nbd.o
00:04:31.265 CXX test/cpp_headers/net.o
00:04:31.265 CXX test/cpp_headers/notify.o
00:04:31.265 CXX test/cpp_headers/nvme.o
00:04:31.265 CXX test/cpp_headers/nvme_intel.o
00:04:31.265 CXX test/cpp_headers/nvme_ocssd.o
00:04:31.265 CXX test/cpp_headers/nvme_ocssd_spec.o
00:04:31.265 CXX test/cpp_headers/nvme_spec.o
00:04:31.265 CXX test/cpp_headers/nvme_zns.o
00:04:31.526 CXX test/cpp_headers/nvmf_cmd.o
00:04:31.526 CXX test/cpp_headers/nvmf_fc_spec.o
00:04:31.526 CXX test/cpp_headers/nvmf.o
00:04:31.526 LINK nvme_fuzz
00:04:31.526 LINK test_dma
00:04:31.526 CC test/event/event_perf/event_perf.o
00:04:31.526 CXX test/cpp_headers/nvmf_spec.o
00:04:31.526 CC test/event/reactor/reactor.o
00:04:31.526 CXX test/cpp_headers/nvmf_transport.o
00:04:31.526 CXX test/cpp_headers/opal.o
00:04:31.526 CXX test/cpp_headers/opal_spec.o
00:04:31.526 CC examples/idxd/perf/perf.o
00:04:31.526 CC examples/vmd/lsvmd/lsvmd.o
00:04:31.526 CC examples/sock/hello_world/hello_sock.o
00:04:31.526 CC examples/thread/thread/thread_ex.o
00:04:31.526 LINK spdk_bdev
00:04:31.526 CC examples/vmd/led/led.o
00:04:31.526 CC test/event/reactor_perf/reactor_perf.o
00:04:31.788 LINK spdk_nvme
00:04:31.788 CXX test/cpp_headers/pci_ids.o
00:04:31.788 CC test/event/app_repeat/app_repeat.o
00:04:31.788 CXX test/cpp_headers/pipe.o
00:04:31.788 CXX test/cpp_headers/queue.o
00:04:31.788 CXX test/cpp_headers/reduce.o
00:04:31.788 CXX test/cpp_headers/rpc.o
00:04:31.788 CXX test/cpp_headers/scheduler.o
00:04:31.788 CXX test/cpp_headers/scsi.o
00:04:31.788 CXX test/cpp_headers/scsi_spec.o
00:04:31.788 CC test/event/scheduler/scheduler.o
00:04:31.788 CXX test/cpp_headers/sock.o
00:04:31.788 CXX test/cpp_headers/stdinc.o
00:04:31.788 CXX test/cpp_headers/string.o
00:04:31.788 CXX test/cpp_headers/thread.o
00:04:31.788 CXX test/cpp_headers/trace.o
00:04:31.788 CXX test/cpp_headers/trace_parser.o
00:04:31.788 CXX test/cpp_headers/tree.o
00:04:31.788 CXX test/cpp_headers/ublk.o
00:04:31.788 CXX test/cpp_headers/util.o
00:04:31.788 CXX test/cpp_headers/uuid.o
00:04:31.788 CXX test/cpp_headers/version.o
00:04:31.788 CXX test/cpp_headers/vfio_user_pci.o
00:04:31.788 CXX test/cpp_headers/vfio_user_spec.o
00:04:31.788 CXX test/cpp_headers/vhost.o
00:04:31.788 LINK event_perf
00:04:31.788 CXX test/cpp_headers/vmd.o
00:04:31.788 LINK reactor
00:04:31.788 CC app/vhost/vhost.o
00:04:31.788 CXX test/cpp_headers/xor.o
00:04:31.788 LINK spdk_nvme_perf
00:04:31.788 CXX test/cpp_headers/zipf.o
00:04:32.051 LINK mem_callbacks
00:04:32.051 LINK lsvmd
00:04:32.051 LINK reactor_perf
00:04:32.051 LINK led
00:04:32.051 LINK vhost_fuzz
00:04:32.051 LINK spdk_nvme_identify
00:04:32.051 LINK app_repeat
00:04:32.051 LINK spdk_top
00:04:32.051 LINK hello_sock
00:04:32.051 LINK thread
00:04:32.313 LINK scheduler
00:04:32.313 CC test/nvme/e2edp/nvme_dp.o
00:04:32.313 CC test/nvme/sgl/sgl.o
00:04:32.313 CC test/nvme/aer/aer.o
00:04:32.313 CC test/nvme/startup/startup.o
00:04:32.313 CC test/nvme/overhead/overhead.o
00:04:32.313 CC test/nvme/err_injection/err_injection.o
00:04:32.313 CC test/nvme/reset/reset.o
00:04:32.313 CC test/nvme/reserve/reserve.o
00:04:32.313 CC test/nvme/simple_copy/simple_copy.o
00:04:32.313 LINK vhost
00:04:32.313 CC test/nvme/boot_partition/boot_partition.o
00:04:32.313 CC test/nvme/connect_stress/connect_stress.o
00:04:32.313 CC test/blobfs/mkfs/mkfs.o
00:04:32.313 CC test/nvme/compliance/nvme_compliance.o
00:04:32.313 CC test/accel/dif/dif.o
00:04:32.313 LINK idxd_perf
00:04:32.313 CC test/nvme/doorbell_aers/doorbell_aers.o
00:04:32.313 CC test/nvme/fused_ordering/fused_ordering.o
00:04:32.313 CC test/nvme/fdp/fdp.o
00:04:32.313 CC test/nvme/cuse/cuse.o
00:04:32.313 CC test/lvol/esnap/esnap.o
00:04:32.572 LINK startup
00:04:32.572 LINK boot_partition
00:04:32.572 LINK connect_stress
00:04:32.572 LINK reserve
00:04:32.572 CC examples/nvme/reconnect/reconnect.o
00:04:32.572 LINK err_injection
00:04:32.572 CC examples/nvme/nvme_manage/nvme_manage.o
00:04:32.572 CC examples/nvme/hello_world/hello_world.o
00:04:32.572 CC examples/nvme/arbitration/arbitration.o
00:04:32.572 CC examples/nvme/hotplug/hotplug.o
00:04:32.572 CC examples/nvme/cmb_copy/cmb_copy.o
00:04:32.572 CC examples/nvme/abort/abort.o
00:04:32.572 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:04:32.572 LINK fused_ordering
00:04:32.572 LINK sgl
00:04:32.572 LINK reset
00:04:32.572 LINK nvme_dp
00:04:32.572 CC examples/accel/perf/accel_perf.o
00:04:32.572 LINK aer
00:04:32.572 LINK overhead
00:04:32.572 LINK doorbell_aers
00:04:32.572 CC examples/blob/cli/blobcli.o
00:04:32.572 CC examples/fsdev/hello_world/hello_fsdev.o
00:04:32.572 LINK memory_ut
00:04:32.830 CC examples/blob/hello_world/hello_blob.o
00:04:32.830 LINK mkfs
00:04:32.830 LINK nvme_compliance
00:04:32.830 LINK simple_copy
00:04:32.830 LINK fdp
00:04:32.830 LINK hotplug
00:04:32.830 LINK hello_world
00:04:32.830 LINK pmr_persistence
00:04:32.830 LINK cmb_copy
00:04:33.087 LINK hello_fsdev
00:04:33.087 LINK abort
00:04:33.087 LINK arbitration
00:04:33.087 LINK reconnect
00:04:33.087 LINK hello_blob
00:04:33.087
LINK nvme_manage 00:04:33.087 LINK dif 00:04:33.346 LINK blobcli 00:04:33.346 LINK accel_perf 00:04:33.346 LINK iscsi_fuzz 00:04:33.604 CC test/bdev/bdevio/bdevio.o 00:04:33.604 CC examples/bdev/hello_world/hello_bdev.o 00:04:33.604 CC examples/bdev/bdevperf/bdevperf.o 00:04:33.864 LINK hello_bdev 00:04:33.864 LINK cuse 00:04:34.123 LINK bdevio 00:04:34.382 LINK bdevperf 00:04:34.949 CC examples/nvmf/nvmf/nvmf.o 00:04:35.207 LINK nvmf 00:04:37.745 LINK esnap 00:04:37.745 00:04:37.745 real 1m7.103s 00:04:37.745 user 9m5.326s 00:04:37.745 sys 2m0.694s 00:04:37.745 18:25:24 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:37.745 18:25:24 make -- common/autotest_common.sh@10 -- $ set +x 00:04:37.745 ************************************ 00:04:37.745 END TEST make 00:04:37.745 ************************************ 00:04:37.745 18:25:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:37.745 18:25:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:37.745 18:25:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:37.745 18:25:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.745 18:25:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:37.745 18:25:24 -- pm/common@44 -- $ pid=497575 00:04:37.745 18:25:24 -- pm/common@50 -- $ kill -TERM 497575 00:04:37.745 18:25:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.745 18:25:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:37.745 18:25:24 -- pm/common@44 -- $ pid=497577 00:04:37.745 18:25:24 -- pm/common@50 -- $ kill -TERM 497577 00:04:37.745 18:25:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.745 18:25:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:37.745 18:25:24 -- pm/common@44 -- $ 
pid=497579 00:04:37.745 18:25:24 -- pm/common@50 -- $ kill -TERM 497579 00:04:37.745 18:25:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:37.745 18:25:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:37.745 18:25:24 -- pm/common@44 -- $ pid=497607 00:04:37.745 18:25:24 -- pm/common@50 -- $ sudo -E kill -TERM 497607 00:04:38.004 18:25:24 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:38.004 18:25:24 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:38.004 18:25:24 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:38.004 18:25:24 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:38.004 18:25:24 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:38.004 18:25:24 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:38.004 18:25:24 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.004 18:25:24 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.004 18:25:24 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.004 18:25:24 -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.004 18:25:24 -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.004 18:25:24 -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.004 18:25:24 -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.004 18:25:24 -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.004 18:25:24 -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.004 18:25:24 -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.004 18:25:24 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.004 18:25:24 -- scripts/common.sh@344 -- # case "$op" in 00:04:38.004 18:25:24 -- scripts/common.sh@345 -- # : 1 00:04:38.004 18:25:24 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.004 18:25:24 -- scripts/common.sh@364 -- # (( v < (ver1_l > 
ver2_l ? ver1_l : ver2_l) )) 00:04:38.004 18:25:24 -- scripts/common.sh@365 -- # decimal 1 00:04:38.004 18:25:24 -- scripts/common.sh@353 -- # local d=1 00:04:38.004 18:25:24 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.004 18:25:24 -- scripts/common.sh@355 -- # echo 1 00:04:38.004 18:25:24 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.004 18:25:24 -- scripts/common.sh@366 -- # decimal 2 00:04:38.004 18:25:24 -- scripts/common.sh@353 -- # local d=2 00:04:38.004 18:25:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.004 18:25:24 -- scripts/common.sh@355 -- # echo 2 00:04:38.004 18:25:24 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.004 18:25:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.004 18:25:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.004 18:25:24 -- scripts/common.sh@368 -- # return 0 00:04:38.004 18:25:24 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.004 18:25:24 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:38.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.004 --rc genhtml_branch_coverage=1 00:04:38.004 --rc genhtml_function_coverage=1 00:04:38.004 --rc genhtml_legend=1 00:04:38.004 --rc geninfo_all_blocks=1 00:04:38.004 --rc geninfo_unexecuted_blocks=1 00:04:38.004 00:04:38.004 ' 00:04:38.004 18:25:24 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:38.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.004 --rc genhtml_branch_coverage=1 00:04:38.004 --rc genhtml_function_coverage=1 00:04:38.004 --rc genhtml_legend=1 00:04:38.004 --rc geninfo_all_blocks=1 00:04:38.004 --rc geninfo_unexecuted_blocks=1 00:04:38.004 00:04:38.004 ' 00:04:38.004 18:25:24 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:38.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.004 --rc genhtml_branch_coverage=1 00:04:38.004 --rc 
genhtml_function_coverage=1 00:04:38.004 --rc genhtml_legend=1 00:04:38.004 --rc geninfo_all_blocks=1 00:04:38.004 --rc geninfo_unexecuted_blocks=1 00:04:38.004 00:04:38.004 ' 00:04:38.004 18:25:24 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:38.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.004 --rc genhtml_branch_coverage=1 00:04:38.004 --rc genhtml_function_coverage=1 00:04:38.004 --rc genhtml_legend=1 00:04:38.004 --rc geninfo_all_blocks=1 00:04:38.004 --rc geninfo_unexecuted_blocks=1 00:04:38.004 00:04:38.004 ' 00:04:38.004 18:25:24 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:38.005 18:25:24 -- nvmf/common.sh@7 -- # uname -s 00:04:38.005 18:25:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.005 18:25:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.005 18:25:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.005 18:25:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.005 18:25:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.005 18:25:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.005 18:25:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.005 18:25:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.005 18:25:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.005 18:25:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.005 18:25:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:38.005 18:25:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:38.005 18:25:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.005 18:25:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.005 18:25:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:38.005 18:25:24 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:38.005 18:25:24 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:38.005 18:25:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:38.005 18:25:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.005 18:25:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.005 18:25:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.005 18:25:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.005 18:25:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.005 18:25:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.005 18:25:24 -- paths/export.sh@5 -- # export PATH 00:04:38.005 18:25:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.005 18:25:24 -- nvmf/common.sh@51 -- # : 0 00:04:38.005 18:25:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:38.005 18:25:24 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:38.005 18:25:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.005 18:25:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.005 18:25:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.005 18:25:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:38.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:38.005 18:25:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:38.005 18:25:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:38.005 18:25:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:38.005 18:25:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:38.005 18:25:24 -- spdk/autotest.sh@32 -- # uname -s 00:04:38.005 18:25:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:38.005 18:25:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:38.005 18:25:24 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:38.005 18:25:24 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:38.005 18:25:24 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:38.005 18:25:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:38.005 18:25:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:38.005 18:25:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:38.005 18:25:24 -- spdk/autotest.sh@48 -- # udevadm_pid=578572 00:04:38.005 18:25:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:38.005 18:25:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:38.005 18:25:24 -- pm/common@17 -- # local monitor 00:04:38.005 18:25:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:38.005 18:25:24 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:38.005 18:25:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:38.005 18:25:24 -- pm/common@21 -- # date +%s 00:04:38.005 18:25:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:38.005 18:25:24 -- pm/common@21 -- # date +%s 00:04:38.005 18:25:24 -- pm/common@25 -- # sleep 1 00:04:38.005 18:25:24 -- pm/common@21 -- # date +%s 00:04:38.005 18:25:24 -- pm/common@21 -- # date +%s 00:04:38.005 18:25:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731864324 00:04:38.005 18:25:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731864324 00:04:38.005 18:25:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731864324 00:04:38.005 18:25:24 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731864324 00:04:38.005 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731864324_collect-cpu-load.pm.log 00:04:38.005 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731864324_collect-vmstat.pm.log 00:04:38.005 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731864324_collect-cpu-temp.pm.log 00:04:38.005 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731864324_collect-bmc-pm.bmc.pm.log 00:04:38.945 
18:25:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:38.945 18:25:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:38.945 18:25:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.945 18:25:25 -- common/autotest_common.sh@10 -- # set +x 00:04:39.203 18:25:25 -- spdk/autotest.sh@59 -- # create_test_list 00:04:39.203 18:25:25 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:39.203 18:25:25 -- common/autotest_common.sh@10 -- # set +x 00:04:39.203 18:25:25 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:39.203 18:25:25 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:39.203 18:25:25 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:39.203 18:25:25 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:39.203 18:25:25 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:39.203 18:25:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:39.203 18:25:25 -- common/autotest_common.sh@1457 -- # uname 00:04:39.203 18:25:25 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:39.203 18:25:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:39.203 18:25:25 -- common/autotest_common.sh@1477 -- # uname 00:04:39.203 18:25:25 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:39.203 18:25:25 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:39.203 18:25:25 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:39.203 lcov: LCOV version 1.15 00:04:39.204 18:25:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:11.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:11.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:16.570 18:26:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:16.570 18:26:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.570 18:26:02 -- common/autotest_common.sh@10 -- # set +x 00:05:16.570 18:26:02 -- spdk/autotest.sh@78 -- # rm -f 00:05:16.570 18:26:02 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:17.137 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:17.137 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:17.138 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:17.138 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:17.138 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:17.138 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:17.138 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:17.138 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:17.138 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:17.138 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:17.138 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:17.138 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:17.138 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:17.138 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:17.138 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:17.138 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:17.397 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:17.397 18:26:03 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:17.397 18:26:03 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:17.397 18:26:03 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:17.397 18:26:03 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:17.397 18:26:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:17.397 18:26:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:17.397 18:26:03 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:17.397 18:26:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:17.397 18:26:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:17.397 18:26:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:17.397 18:26:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.397 18:26:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:17.397 18:26:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:17.397 18:26:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:17.397 18:26:03 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:17.397 No valid GPT data, bailing 00:05:17.397 18:26:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:17.397 18:26:03 -- scripts/common.sh@394 -- # pt= 00:05:17.397 18:26:03 -- scripts/common.sh@395 -- # return 1 00:05:17.397 18:26:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:17.397 1+0 records in 00:05:17.397 1+0 records out 00:05:17.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0022376 s, 469 MB/s 00:05:17.397 18:26:03 -- spdk/autotest.sh@105 -- # sync 00:05:17.397 18:26:03 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:17.397 18:26:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:17.397 18:26:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:19.934 18:26:05 -- spdk/autotest.sh@111 -- # uname -s 00:05:19.934 18:26:05 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:19.934 18:26:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:19.934 18:26:05 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:20.501 Hugepages 00:05:20.501 node hugesize free / total 00:05:20.760 node0 1048576kB 0 / 0 00:05:20.760 node0 2048kB 0 / 0 00:05:20.760 node1 1048576kB 0 / 0 00:05:20.760 node1 2048kB 0 / 0 00:05:20.760 00:05:20.760 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:20.760 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:20.760 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:20.760 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:20.760 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:20.760 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:20.760 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:20.760 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:20.760 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:20.760 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:20.760 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:20.760 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:20.760 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:20.760 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:20.760 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:20.760 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:20.760 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:20.760 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:20.760 18:26:07 -- spdk/autotest.sh@117 -- # uname -s 00:05:20.760 18:26:07 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:20.760 18:26:07 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:05:20.760 18:26:07 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:22.138 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:22.138 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:22.138 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:22.138 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:22.138 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:22.138 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:22.138 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:22.138 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:22.138 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:22.138 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:22.138 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:22.138 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:22.138 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:22.138 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:22.138 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:22.138 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:23.077 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:23.337 18:26:09 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:24.276 18:26:10 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:24.276 18:26:10 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:24.276 18:26:10 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:24.276 18:26:10 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:24.276 18:26:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:24.276 18:26:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:24.276 18:26:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.276 18:26:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:24.276 18:26:10 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:05:24.276 18:26:10 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:24.276 18:26:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:24.276 18:26:10 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:25.654 Waiting for block devices as requested 00:05:25.654 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:25.654 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:25.654 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:25.914 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:25.914 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:25.914 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:26.173 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:26.173 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:26.173 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:26.173 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:26.432 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:26.432 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:26.432 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:26.692 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:26.692 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:26.692 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:26.692 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:26.952 18:26:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:26.952 18:26:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:26.952 18:26:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:26.952 18:26:13 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:26.952 18:26:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:26.952 18:26:13 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:26.952 18:26:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:26.952 18:26:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:26.952 18:26:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:26.952 18:26:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:26.952 18:26:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:26.952 18:26:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:26.952 18:26:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:26.952 18:26:13 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:26.952 18:26:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:26.952 18:26:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:26.952 18:26:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:26.952 18:26:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:26.952 18:26:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:26.952 18:26:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:26.952 18:26:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:26.952 18:26:13 -- common/autotest_common.sh@1543 -- # continue 00:05:26.952 18:26:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:26.952 18:26:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.952 18:26:13 -- common/autotest_common.sh@10 -- # set +x 00:05:26.952 18:26:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:26.952 18:26:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.952 18:26:13 -- common/autotest_common.sh@10 -- # set +x 00:05:26.952 18:26:13 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:28.328 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:28.328 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:05:28.328 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:28.328 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:28.328 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:28.328 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:28.328 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:28.328 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:28.328 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:28.328 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:28.328 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:28.328 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:28.328 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:28.328 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:28.328 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:28.328 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:29.267 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:29.267 18:26:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:29.267 18:26:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.267 18:26:15 -- common/autotest_common.sh@10 -- # set +x 00:05:29.267 18:26:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:29.267 18:26:15 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:29.526 18:26:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:29.526 18:26:15 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:29.526 18:26:15 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:29.526 18:26:15 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:29.526 18:26:15 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:29.526 18:26:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:29.526 18:26:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:29.526 18:26:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:29.526 18:26:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:05:29.526 18:26:15 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:29.526 18:26:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:29.526 18:26:15 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:29.526 18:26:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:29.526 18:26:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:29.526 18:26:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:29.526 18:26:15 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:29.526 18:26:15 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:29.526 18:26:15 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:29.526 18:26:15 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:29.526 18:26:15 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:05:29.526 18:26:15 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:05:29.526 18:26:15 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=589251 00:05:29.526 18:26:15 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.526 18:26:15 -- common/autotest_common.sh@1585 -- # waitforlisten 589251 00:05:29.526 18:26:15 -- common/autotest_common.sh@835 -- # '[' -z 589251 ']' 00:05:29.526 18:26:15 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.526 18:26:15 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.526 18:26:15 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.526 18:26:15 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.526 18:26:15 -- common/autotest_common.sh@10 -- # set +x 00:05:29.526 [2024-11-17 18:26:15.957931] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:29.526 [2024-11-17 18:26:15.958017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid589251 ] 00:05:29.526 [2024-11-17 18:26:16.019858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.526 [2024-11-17 18:26:16.068679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.784 18:26:16 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.784 18:26:16 -- common/autotest_common.sh@868 -- # return 0 00:05:29.784 18:26:16 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:29.784 18:26:16 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:29.784 18:26:16 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:33.070 nvme0n1 00:05:33.070 18:26:19 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:33.328 [2024-11-17 18:26:19.671984] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:33.328 [2024-11-17 18:26:19.672025] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:33.328 request: 00:05:33.328 { 00:05:33.328 "nvme_ctrlr_name": "nvme0", 00:05:33.328 "password": "test", 00:05:33.328 "method": "bdev_nvme_opal_revert", 00:05:33.328 "req_id": 1 00:05:33.328 } 00:05:33.328 Got JSON-RPC error response 00:05:33.328 response: 00:05:33.328 { 00:05:33.328 
"code": -32603, 00:05:33.328 "message": "Internal error" 00:05:33.328 } 00:05:33.328 18:26:19 -- common/autotest_common.sh@1591 -- # true 00:05:33.328 18:26:19 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:33.328 18:26:19 -- common/autotest_common.sh@1595 -- # killprocess 589251 00:05:33.328 18:26:19 -- common/autotest_common.sh@954 -- # '[' -z 589251 ']' 00:05:33.328 18:26:19 -- common/autotest_common.sh@958 -- # kill -0 589251 00:05:33.328 18:26:19 -- common/autotest_common.sh@959 -- # uname 00:05:33.328 18:26:19 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.328 18:26:19 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 589251 00:05:33.328 18:26:19 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.328 18:26:19 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.328 18:26:19 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 589251' 00:05:33.328 killing process with pid 589251 00:05:33.328 18:26:19 -- common/autotest_common.sh@973 -- # kill 589251 00:05:33.328 18:26:19 -- common/autotest_common.sh@978 -- # wait 589251 00:05:35.228 18:26:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:35.228 18:26:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:35.228 18:26:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:35.228 18:26:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:35.228 18:26:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:35.228 18:26:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.228 18:26:21 -- common/autotest_common.sh@10 -- # set +x 00:05:35.228 18:26:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:35.228 18:26:21 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:35.228 18:26:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.228 18:26:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.228 18:26:21 -- 
common/autotest_common.sh@10 -- # set +x 00:05:35.228 ************************************ 00:05:35.228 START TEST env 00:05:35.228 ************************************ 00:05:35.228 18:26:21 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:35.228 * Looking for test storage... 00:05:35.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:35.229 18:26:21 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.229 18:26:21 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.229 18:26:21 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.229 18:26:21 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.229 18:26:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.229 18:26:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.229 18:26:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.229 18:26:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.229 18:26:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.229 18:26:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.229 18:26:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.229 18:26:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.229 18:26:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.229 18:26:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.229 18:26:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.229 18:26:21 env -- scripts/common.sh@344 -- # case "$op" in 00:05:35.229 18:26:21 env -- scripts/common.sh@345 -- # : 1 00:05:35.229 18:26:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.229 18:26:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.229 18:26:21 env -- scripts/common.sh@365 -- # decimal 1 00:05:35.229 18:26:21 env -- scripts/common.sh@353 -- # local d=1 00:05:35.229 18:26:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.229 18:26:21 env -- scripts/common.sh@355 -- # echo 1 00:05:35.229 18:26:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.229 18:26:21 env -- scripts/common.sh@366 -- # decimal 2 00:05:35.229 18:26:21 env -- scripts/common.sh@353 -- # local d=2 00:05:35.229 18:26:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.229 18:26:21 env -- scripts/common.sh@355 -- # echo 2 00:05:35.229 18:26:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.229 18:26:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.229 18:26:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.229 18:26:21 env -- scripts/common.sh@368 -- # return 0 00:05:35.229 18:26:21 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.229 18:26:21 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.229 --rc genhtml_branch_coverage=1 00:05:35.229 --rc genhtml_function_coverage=1 00:05:35.229 --rc genhtml_legend=1 00:05:35.229 --rc geninfo_all_blocks=1 00:05:35.229 --rc geninfo_unexecuted_blocks=1 00:05:35.229 00:05:35.229 ' 00:05:35.229 18:26:21 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.229 --rc genhtml_branch_coverage=1 00:05:35.229 --rc genhtml_function_coverage=1 00:05:35.229 --rc genhtml_legend=1 00:05:35.229 --rc geninfo_all_blocks=1 00:05:35.229 --rc geninfo_unexecuted_blocks=1 00:05:35.229 00:05:35.229 ' 00:05:35.229 18:26:21 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:35.229 --rc genhtml_branch_coverage=1 00:05:35.229 --rc genhtml_function_coverage=1 00:05:35.229 --rc genhtml_legend=1 00:05:35.229 --rc geninfo_all_blocks=1 00:05:35.229 --rc geninfo_unexecuted_blocks=1 00:05:35.229 00:05:35.229 ' 00:05:35.229 18:26:21 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.229 --rc genhtml_branch_coverage=1 00:05:35.229 --rc genhtml_function_coverage=1 00:05:35.229 --rc genhtml_legend=1 00:05:35.229 --rc geninfo_all_blocks=1 00:05:35.229 --rc geninfo_unexecuted_blocks=1 00:05:35.229 00:05:35.229 ' 00:05:35.229 18:26:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:35.229 18:26:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.229 18:26:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.229 18:26:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.229 ************************************ 00:05:35.229 START TEST env_memory 00:05:35.229 ************************************ 00:05:35.229 18:26:21 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:35.229 00:05:35.229 00:05:35.229 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.229 http://cunit.sourceforge.net/ 00:05:35.229 00:05:35.229 00:05:35.229 Suite: memory 00:05:35.229 Test: alloc and free memory map ...[2024-11-17 18:26:21.643913] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:35.229 passed 00:05:35.229 Test: mem map translation ...[2024-11-17 18:26:21.663776] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:35.229 [2024-11-17 
18:26:21.663796] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:35.229 [2024-11-17 18:26:21.663853] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:35.229 [2024-11-17 18:26:21.663865] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:35.229 passed 00:05:35.229 Test: mem map registration ...[2024-11-17 18:26:21.704994] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:35.229 [2024-11-17 18:26:21.705012] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:35.229 passed 00:05:35.229 Test: mem map adjacent registrations ...passed 00:05:35.229 00:05:35.229 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.229 suites 1 1 n/a 0 0 00:05:35.229 tests 4 4 4 0 0 00:05:35.229 asserts 152 152 152 0 n/a 00:05:35.229 00:05:35.229 Elapsed time = 0.144 seconds 00:05:35.229 00:05:35.229 real 0m0.152s 00:05:35.229 user 0m0.142s 00:05:35.229 sys 0m0.010s 00:05:35.229 18:26:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.229 18:26:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:35.229 ************************************ 00:05:35.229 END TEST env_memory 00:05:35.229 ************************************ 00:05:35.229 18:26:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:35.229 18:26:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:35.229 18:26:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.229 18:26:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.489 ************************************ 00:05:35.489 START TEST env_vtophys 00:05:35.489 ************************************ 00:05:35.489 18:26:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:35.489 EAL: lib.eal log level changed from notice to debug 00:05:35.489 EAL: Detected lcore 0 as core 0 on socket 0 00:05:35.489 EAL: Detected lcore 1 as core 1 on socket 0 00:05:35.489 EAL: Detected lcore 2 as core 2 on socket 0 00:05:35.489 EAL: Detected lcore 3 as core 3 on socket 0 00:05:35.489 EAL: Detected lcore 4 as core 4 on socket 0 00:05:35.489 EAL: Detected lcore 5 as core 5 on socket 0 00:05:35.489 EAL: Detected lcore 6 as core 8 on socket 0 00:05:35.489 EAL: Detected lcore 7 as core 9 on socket 0 00:05:35.489 EAL: Detected lcore 8 as core 10 on socket 0 00:05:35.489 EAL: Detected lcore 9 as core 11 on socket 0 00:05:35.489 EAL: Detected lcore 10 as core 12 on socket 0 00:05:35.489 EAL: Detected lcore 11 as core 13 on socket 0 00:05:35.489 EAL: Detected lcore 12 as core 0 on socket 1 00:05:35.489 EAL: Detected lcore 13 as core 1 on socket 1 00:05:35.489 EAL: Detected lcore 14 as core 2 on socket 1 00:05:35.489 EAL: Detected lcore 15 as core 3 on socket 1 00:05:35.489 EAL: Detected lcore 16 as core 4 on socket 1 00:05:35.489 EAL: Detected lcore 17 as core 5 on socket 1 00:05:35.489 EAL: Detected lcore 18 as core 8 on socket 1 00:05:35.489 EAL: Detected lcore 19 as core 9 on socket 1 00:05:35.489 EAL: Detected lcore 20 as core 10 on socket 1 00:05:35.489 EAL: Detected lcore 21 as core 11 on socket 1 00:05:35.489 EAL: Detected lcore 22 as core 12 on socket 1 00:05:35.489 EAL: Detected lcore 23 as core 13 on socket 1 00:05:35.489 EAL: Detected lcore 24 as core 0 on socket 0 00:05:35.489 EAL: Detected lcore 25 as core 
1 on socket 0 00:05:35.489 EAL: Detected lcore 26 as core 2 on socket 0 00:05:35.489 EAL: Detected lcore 27 as core 3 on socket 0 00:05:35.489 EAL: Detected lcore 28 as core 4 on socket 0 00:05:35.489 EAL: Detected lcore 29 as core 5 on socket 0 00:05:35.489 EAL: Detected lcore 30 as core 8 on socket 0 00:05:35.489 EAL: Detected lcore 31 as core 9 on socket 0 00:05:35.489 EAL: Detected lcore 32 as core 10 on socket 0 00:05:35.489 EAL: Detected lcore 33 as core 11 on socket 0 00:05:35.489 EAL: Detected lcore 34 as core 12 on socket 0 00:05:35.489 EAL: Detected lcore 35 as core 13 on socket 0 00:05:35.489 EAL: Detected lcore 36 as core 0 on socket 1 00:05:35.489 EAL: Detected lcore 37 as core 1 on socket 1 00:05:35.489 EAL: Detected lcore 38 as core 2 on socket 1 00:05:35.489 EAL: Detected lcore 39 as core 3 on socket 1 00:05:35.490 EAL: Detected lcore 40 as core 4 on socket 1 00:05:35.490 EAL: Detected lcore 41 as core 5 on socket 1 00:05:35.490 EAL: Detected lcore 42 as core 8 on socket 1 00:05:35.490 EAL: Detected lcore 43 as core 9 on socket 1 00:05:35.490 EAL: Detected lcore 44 as core 10 on socket 1 00:05:35.490 EAL: Detected lcore 45 as core 11 on socket 1 00:05:35.490 EAL: Detected lcore 46 as core 12 on socket 1 00:05:35.490 EAL: Detected lcore 47 as core 13 on socket 1 00:05:35.490 EAL: Maximum logical cores by configuration: 128 00:05:35.490 EAL: Detected CPU lcores: 48 00:05:35.490 EAL: Detected NUMA nodes: 2 00:05:35.490 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:35.490 EAL: Detected shared linkage of DPDK 00:05:35.490 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:35.490 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:35.490 EAL: Registered [vdev] bus. 
00:05:35.490 EAL: bus.vdev log level changed from disabled to notice 00:05:35.490 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:35.490 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:35.490 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:35.490 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:35.490 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:35.490 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:35.490 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:35.490 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:35.490 EAL: No shared files mode enabled, IPC will be disabled 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: Bus pci wants IOVA as 'DC' 00:05:35.490 EAL: Bus vdev wants IOVA as 'DC' 00:05:35.490 EAL: Buses did not request a specific IOVA mode. 00:05:35.490 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:35.490 EAL: Selected IOVA mode 'VA' 00:05:35.490 EAL: Probing VFIO support... 00:05:35.490 EAL: IOMMU type 1 (Type 1) is supported 00:05:35.490 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:35.490 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:35.490 EAL: VFIO support initialized 00:05:35.490 EAL: Ask a virtual area of 0x2e000 bytes 00:05:35.490 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:35.490 EAL: Setting up physically contiguous memory... 
00:05:35.490 EAL: Setting maximum number of open files to 524288 00:05:35.490 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:35.490 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:35.490 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:35.490 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.490 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:35.490 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.490 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.490 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:35.490 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:35.490 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.490 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:35.490 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.490 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.490 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:35.490 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:35.490 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.490 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:35.490 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.490 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.490 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:35.490 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:35.490 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.490 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:35.490 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.490 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.490 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:35.490 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:35.490 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:35.490 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.490 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:35.490 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.490 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.490 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:35.490 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:35.490 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.490 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:35.490 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.490 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.490 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:35.490 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:35.490 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.490 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:35.490 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.490 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.490 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:35.490 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:35.490 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.490 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:35.490 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.490 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.490 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:35.490 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:35.490 EAL: Hugepages will be freed exactly as allocated. 
00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: TSC frequency is ~2700000 KHz 00:05:35.490 EAL: Main lcore 0 is ready (tid=7f1343e99a00;cpuset=[0]) 00:05:35.490 EAL: Trying to obtain current memory policy. 00:05:35.490 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.490 EAL: Restoring previous memory policy: 0 00:05:35.490 EAL: request: mp_malloc_sync 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: Heap on socket 0 was expanded by 2MB 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:35.490 EAL: Mem event callback 'spdk:(nil)' registered 00:05:35.490 00:05:35.490 00:05:35.490 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.490 http://cunit.sourceforge.net/ 00:05:35.490 00:05:35.490 00:05:35.490 Suite: components_suite 00:05:35.490 Test: vtophys_malloc_test ...passed 00:05:35.490 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:35.490 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.490 EAL: Restoring previous memory policy: 4 00:05:35.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.490 EAL: request: mp_malloc_sync 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: Heap on socket 0 was expanded by 4MB 00:05:35.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.490 EAL: request: mp_malloc_sync 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: Heap on socket 0 was shrunk by 4MB 00:05:35.490 EAL: Trying to obtain current memory policy. 
00:05:35.490 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.490 EAL: Restoring previous memory policy: 4 00:05:35.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.490 EAL: request: mp_malloc_sync 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: Heap on socket 0 was expanded by 6MB 00:05:35.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.490 EAL: request: mp_malloc_sync 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: Heap on socket 0 was shrunk by 6MB 00:05:35.490 EAL: Trying to obtain current memory policy. 00:05:35.490 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.490 EAL: Restoring previous memory policy: 4 00:05:35.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.490 EAL: request: mp_malloc_sync 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: Heap on socket 0 was expanded by 10MB 00:05:35.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.490 EAL: request: mp_malloc_sync 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: Heap on socket 0 was shrunk by 10MB 00:05:35.490 EAL: Trying to obtain current memory policy. 00:05:35.490 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.490 EAL: Restoring previous memory policy: 4 00:05:35.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.490 EAL: request: mp_malloc_sync 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: Heap on socket 0 was expanded by 18MB 00:05:35.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.490 EAL: request: mp_malloc_sync 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: Heap on socket 0 was shrunk by 18MB 00:05:35.490 EAL: Trying to obtain current memory policy. 
00:05:35.490 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.490 EAL: Restoring previous memory policy: 4 00:05:35.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.490 EAL: request: mp_malloc_sync 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: Heap on socket 0 was expanded by 34MB 00:05:35.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.490 EAL: request: mp_malloc_sync 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.490 EAL: Heap on socket 0 was shrunk by 34MB 00:05:35.490 EAL: Trying to obtain current memory policy. 00:05:35.490 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.490 EAL: Restoring previous memory policy: 4 00:05:35.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.490 EAL: request: mp_malloc_sync 00:05:35.490 EAL: No shared files mode enabled, IPC is disabled 00:05:35.491 EAL: Heap on socket 0 was expanded by 66MB 00:05:35.491 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.491 EAL: request: mp_malloc_sync 00:05:35.491 EAL: No shared files mode enabled, IPC is disabled 00:05:35.491 EAL: Heap on socket 0 was shrunk by 66MB 00:05:35.491 EAL: Trying to obtain current memory policy. 00:05:35.491 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.491 EAL: Restoring previous memory policy: 4 00:05:35.491 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.491 EAL: request: mp_malloc_sync 00:05:35.491 EAL: No shared files mode enabled, IPC is disabled 00:05:35.491 EAL: Heap on socket 0 was expanded by 130MB 00:05:35.491 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.491 EAL: request: mp_malloc_sync 00:05:35.491 EAL: No shared files mode enabled, IPC is disabled 00:05:35.491 EAL: Heap on socket 0 was shrunk by 130MB 00:05:35.491 EAL: Trying to obtain current memory policy. 
00:05:35.491 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.749 EAL: Restoring previous memory policy: 4 00:05:35.749 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.749 EAL: request: mp_malloc_sync 00:05:35.749 EAL: No shared files mode enabled, IPC is disabled 00:05:35.749 EAL: Heap on socket 0 was expanded by 258MB 00:05:35.749 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.749 EAL: request: mp_malloc_sync 00:05:35.749 EAL: No shared files mode enabled, IPC is disabled 00:05:35.749 EAL: Heap on socket 0 was shrunk by 258MB 00:05:35.749 EAL: Trying to obtain current memory policy. 00:05:35.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.008 EAL: Restoring previous memory policy: 4 00:05:36.008 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.008 EAL: request: mp_malloc_sync 00:05:36.008 EAL: No shared files mode enabled, IPC is disabled 00:05:36.008 EAL: Heap on socket 0 was expanded by 514MB 00:05:36.008 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.008 EAL: request: mp_malloc_sync 00:05:36.008 EAL: No shared files mode enabled, IPC is disabled 00:05:36.008 EAL: Heap on socket 0 was shrunk by 514MB 00:05:36.008 EAL: Trying to obtain current memory policy. 
00:05:36.008 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.266 EAL: Restoring previous memory policy: 4 00:05:36.266 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.266 EAL: request: mp_malloc_sync 00:05:36.266 EAL: No shared files mode enabled, IPC is disabled 00:05:36.266 EAL: Heap on socket 0 was expanded by 1026MB 00:05:36.524 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.783 EAL: request: mp_malloc_sync 00:05:36.783 EAL: No shared files mode enabled, IPC is disabled 00:05:36.783 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:36.783 passed 00:05:36.783 00:05:36.783 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.783 suites 1 1 n/a 0 0 00:05:36.783 tests 2 2 2 0 0 00:05:36.783 asserts 497 497 497 0 n/a 00:05:36.783 00:05:36.783 Elapsed time = 1.314 seconds 00:05:36.783 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.783 EAL: request: mp_malloc_sync 00:05:36.783 EAL: No shared files mode enabled, IPC is disabled 00:05:36.783 EAL: Heap on socket 0 was shrunk by 2MB 00:05:36.783 EAL: No shared files mode enabled, IPC is disabled 00:05:36.783 EAL: No shared files mode enabled, IPC is disabled 00:05:36.783 EAL: No shared files mode enabled, IPC is disabled 00:05:36.783 00:05:36.783 real 0m1.433s 00:05:36.783 user 0m0.837s 00:05:36.783 sys 0m0.562s 00:05:36.783 18:26:23 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.783 18:26:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:36.783 ************************************ 00:05:36.783 END TEST env_vtophys 00:05:36.783 ************************************ 00:05:36.783 18:26:23 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:36.783 18:26:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.783 18:26:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.783 18:26:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.783 
************************************ 00:05:36.783 START TEST env_pci 00:05:36.783 ************************************ 00:05:36.783 18:26:23 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:36.783 00:05:36.783 00:05:36.783 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.783 http://cunit.sourceforge.net/ 00:05:36.783 00:05:36.783 00:05:36.783 Suite: pci 00:05:36.783 Test: pci_hook ...[2024-11-17 18:26:23.304399] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 590146 has claimed it 00:05:36.783 EAL: Cannot find device (10000:00:01.0) 00:05:36.783 EAL: Failed to attach device on primary process 00:05:36.783 passed 00:05:36.783 00:05:36.783 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.783 suites 1 1 n/a 0 0 00:05:36.783 tests 1 1 1 0 0 00:05:36.783 asserts 25 25 25 0 n/a 00:05:36.783 00:05:36.783 Elapsed time = 0.021 seconds 00:05:36.783 00:05:36.783 real 0m0.033s 00:05:36.783 user 0m0.012s 00:05:36.783 sys 0m0.020s 00:05:36.783 18:26:23 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.783 18:26:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:36.783 ************************************ 00:05:36.783 END TEST env_pci 00:05:36.783 ************************************ 00:05:36.783 18:26:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:36.783 18:26:23 env -- env/env.sh@15 -- # uname 00:05:36.783 18:26:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:36.783 18:26:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:36.783 18:26:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.783 18:26:23 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:36.783 18:26:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:36.783 18:26:23 env -- common/autotest_common.sh@10 -- # set +x
00:05:37.044 ************************************
00:05:37.044 START TEST env_dpdk_post_init
00:05:37.044 ************************************
00:05:37.044 18:26:23 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:37.044 EAL: Detected CPU lcores: 48
00:05:37.044 EAL: Detected NUMA nodes: 2
00:05:37.044 EAL: Detected shared linkage of DPDK
00:05:37.044 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:37.044 EAL: Selected IOVA mode 'VA'
00:05:37.044 EAL: VFIO support initialized
00:05:37.044 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:37.044 EAL: Using IOMMU type 1 (Type 1)
00:05:37.044 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:05:37.044 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:05:37.044 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:05:37.044 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:05:37.044 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:05:37.044 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:05:37.044 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:05:37.044 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:05:37.044 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:05:37.044 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:05:37.044 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:05:37.305 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:05:37.305 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:05:37.305 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:05:37.305 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:05:37.305 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:05:37.876 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1)
00:05:41.159 EAL: Releasing PCI mapped resource for 0000:88:00.0
00:05:41.159 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000
00:05:41.417 Starting DPDK initialization...
00:05:41.417 Starting SPDK post initialization...
00:05:41.417 SPDK NVMe probe
00:05:41.417 Attaching to 0000:88:00.0
00:05:41.417 Attached to 0000:88:00.0
00:05:41.417 Cleaning up...
00:05:41.417 
00:05:41.417 real 0m4.425s
00:05:41.417 user 0m3.305s
00:05:41.417 sys 0m0.186s
00:05:41.417 18:26:27 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:41.417 18:26:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:41.417 ************************************
00:05:41.417 END TEST env_dpdk_post_init
00:05:41.417 ************************************
00:05:41.417 18:26:27 env -- env/env.sh@26 -- # uname
00:05:41.417 18:26:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:41.417 18:26:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:41.417 18:26:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:41.417 18:26:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:41.417 18:26:27 env -- common/autotest_common.sh@10 -- # set +x
00:05:41.417 ************************************
00:05:41.417 START TEST env_mem_callbacks
00:05:41.417 ************************************
00:05:41.417 18:26:27
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:41.417 EAL: Detected CPU lcores: 48
00:05:41.417 EAL: Detected NUMA nodes: 2
00:05:41.417 EAL: Detected shared linkage of DPDK
00:05:41.417 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:41.417 EAL: Selected IOVA mode 'VA'
00:05:41.417 EAL: VFIO support initialized
00:05:41.417 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:41.417 
00:05:41.417 
00:05:41.417 CUnit - A unit testing framework for C - Version 2.1-3
00:05:41.417 http://cunit.sourceforge.net/
00:05:41.417 
00:05:41.417 
00:05:41.417 Suite: memory
00:05:41.417 Test: test ...
00:05:41.417 register 0x200000200000 2097152
00:05:41.417 malloc 3145728
00:05:41.417 register 0x200000400000 4194304
00:05:41.417 buf 0x200000500000 len 3145728 PASSED
00:05:41.417 malloc 64
00:05:41.417 buf 0x2000004fff40 len 64 PASSED
00:05:41.417 malloc 4194304
00:05:41.417 register 0x200000800000 6291456
00:05:41.417 buf 0x200000a00000 len 4194304 PASSED
00:05:41.417 free 0x200000500000 3145728
00:05:41.417 free 0x2000004fff40 64
00:05:41.417 unregister 0x200000400000 4194304 PASSED
00:05:41.417 free 0x200000a00000 4194304
00:05:41.417 unregister 0x200000800000 6291456 PASSED
00:05:41.417 malloc 8388608
00:05:41.417 register 0x200000400000 10485760
00:05:41.417 buf 0x200000600000 len 8388608 PASSED
00:05:41.417 free 0x200000600000 8388608
00:05:41.417 unregister 0x200000400000 10485760 PASSED
00:05:41.417 passed
00:05:41.417 
00:05:41.417 Run Summary: Type Total Ran Passed Failed Inactive
00:05:41.417 suites 1 1 n/a 0 0
00:05:41.417 tests 1 1 1 0 0
00:05:41.417 asserts 15 15 15 0 n/a
00:05:41.417 
00:05:41.417 Elapsed time = 0.004 seconds
00:05:41.417 
00:05:41.417 real 0m0.049s
00:05:41.417 user 0m0.011s
00:05:41.417 sys 0m0.037s
00:05:41.417 18:26:27 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:41.417 18:26:27
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:41.418 ************************************
00:05:41.418 END TEST env_mem_callbacks
00:05:41.418 ************************************
00:05:41.418 
00:05:41.418 real 0m6.484s
00:05:41.418 user 0m4.518s
00:05:41.418 sys 0m1.016s
00:05:41.418 18:26:27 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:41.418 18:26:27 env -- common/autotest_common.sh@10 -- # set +x
00:05:41.418 ************************************
00:05:41.418 END TEST env
00:05:41.418 ************************************
00:05:41.418 18:26:27 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:41.418 18:26:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:41.418 18:26:27 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:41.418 18:26:27 -- common/autotest_common.sh@10 -- # set +x
00:05:41.418 ************************************
00:05:41.418 START TEST rpc
00:05:41.418 ************************************
00:05:41.418 18:26:27 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:41.686 * Looking for test storage...
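The repeated START TEST / END TEST banners and per-test timing blocks in this log are produced by the autotest harness's run_test wrapper in common/autotest_common.sh. A minimal sketch of that banner-and-status pattern, assuming a hypothetical helper name (run_test_sketch); the real wrapper additionally manages xtrace, timing, and failure accounting:

```shell
# Hedged sketch of the START/END banner pattern seen in this log.
# run_test_sketch is an assumed name, not the real harness function.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  # Run the test command; propagate its failure to the caller.
  "$@" || return 1
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}

run_test_sketch demo echo "hello from demo"
```

Used this way, a failing test command short-circuits before the END banner, which mirrors how an aborted test leaves an unclosed START banner in a real log.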
00:05:41.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.686 18:26:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.686 18:26:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.686 18:26:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.686 18:26:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.686 18:26:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.686 18:26:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.686 18:26:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.686 18:26:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.686 18:26:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.686 18:26:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.686 18:26:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.686 18:26:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:41.686 18:26:28 rpc -- scripts/common.sh@345 -- # : 1 00:05:41.686 18:26:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.686 18:26:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.686 18:26:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:41.686 18:26:28 rpc -- scripts/common.sh@353 -- # local d=1 00:05:41.686 18:26:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.686 18:26:28 rpc -- scripts/common.sh@355 -- # echo 1 00:05:41.686 18:26:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.686 18:26:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:41.686 18:26:28 rpc -- scripts/common.sh@353 -- # local d=2 00:05:41.686 18:26:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.686 18:26:28 rpc -- scripts/common.sh@355 -- # echo 2 00:05:41.686 18:26:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.686 18:26:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.686 18:26:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.686 18:26:28 rpc -- scripts/common.sh@368 -- # return 0 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.686 --rc genhtml_branch_coverage=1 00:05:41.686 --rc genhtml_function_coverage=1 00:05:41.686 --rc genhtml_legend=1 00:05:41.686 --rc geninfo_all_blocks=1 00:05:41.686 --rc geninfo_unexecuted_blocks=1 00:05:41.686 00:05:41.686 ' 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.686 --rc genhtml_branch_coverage=1 00:05:41.686 --rc genhtml_function_coverage=1 00:05:41.686 --rc genhtml_legend=1 00:05:41.686 --rc geninfo_all_blocks=1 00:05:41.686 --rc geninfo_unexecuted_blocks=1 00:05:41.686 00:05:41.686 ' 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:41.686 --rc genhtml_branch_coverage=1 00:05:41.686 --rc genhtml_function_coverage=1 00:05:41.686 --rc genhtml_legend=1 00:05:41.686 --rc geninfo_all_blocks=1 00:05:41.686 --rc geninfo_unexecuted_blocks=1 00:05:41.686 00:05:41.686 ' 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.686 --rc genhtml_branch_coverage=1 00:05:41.686 --rc genhtml_function_coverage=1 00:05:41.686 --rc genhtml_legend=1 00:05:41.686 --rc geninfo_all_blocks=1 00:05:41.686 --rc geninfo_unexecuted_blocks=1 00:05:41.686 00:05:41.686 ' 00:05:41.686 18:26:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=590873 00:05:41.686 18:26:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:41.686 18:26:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.686 18:26:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 590873 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@835 -- # '[' -z 590873 ']' 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.686 18:26:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.686 [2024-11-17 18:26:28.177012] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
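The xtrace above walks scripts/common.sh's lt/cmp_versions helpers, which split versions on ".-:" and compare them component by component to decide whether lcov (here 1.15) predates version 2 and therefore needs the legacy --rc options. A rough equivalent of that ordering check using sort -V; lt_sketch is an assumed name, not the script's actual helper:

```shell
# Hedged sketch: version "less-than" via GNU sort -V, approximating the
# component-wise comparison that cmp_versions performs in the log above.
lt_sketch() {
  # Succeed (exit 0) only when $1 sorts strictly before $2.
  [ "$1" = "$2" ] && return 1
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

if lt_sketch 1.15 2; then
  echo "lcov older than 2: enable legacy lcov_branch/function coverage flags"
fi
```

sort -V orders dotted numeric components the same way the script's digit loop does for inputs like these, though the two can differ on exotic suffixes (rc tags, letters), so this is a simplification rather than a drop-in replacement.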
00:05:41.686 [2024-11-17 18:26:28.177102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid590873 ] 00:05:41.686 [2024-11-17 18:26:28.245581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.994 [2024-11-17 18:26:28.293783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:41.995 [2024-11-17 18:26:28.293840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 590873' to capture a snapshot of events at runtime. 00:05:41.995 [2024-11-17 18:26:28.293853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.995 [2024-11-17 18:26:28.293864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.995 [2024-11-17 18:26:28.293873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid590873 for offline analysis/debug. 
00:05:41.995 [2024-11-17 18:26:28.294431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.995 18:26:28 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.995 18:26:28 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:41.995 18:26:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.995 18:26:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.995 18:26:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.995 18:26:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.995 18:26:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.995 18:26:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.995 18:26:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.267 ************************************ 00:05:42.267 START TEST rpc_integrity 00:05:42.267 ************************************ 00:05:42.267 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:42.267 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.267 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.267 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.267 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.267 18:26:28 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.267 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.267 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.267 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.267 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.267 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.267 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.267 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:42.267 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.267 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.267 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.267 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.267 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.267 { 00:05:42.267 "name": "Malloc0", 00:05:42.267 "aliases": [ 00:05:42.267 "025575df-db21-457c-8c06-74ef30ada6d5" 00:05:42.267 ], 00:05:42.267 "product_name": "Malloc disk", 00:05:42.267 "block_size": 512, 00:05:42.267 "num_blocks": 16384, 00:05:42.267 "uuid": "025575df-db21-457c-8c06-74ef30ada6d5", 00:05:42.267 "assigned_rate_limits": { 00:05:42.267 "rw_ios_per_sec": 0, 00:05:42.267 "rw_mbytes_per_sec": 0, 00:05:42.267 "r_mbytes_per_sec": 0, 00:05:42.267 "w_mbytes_per_sec": 0 00:05:42.267 }, 00:05:42.267 "claimed": false, 00:05:42.267 "zoned": false, 00:05:42.267 "supported_io_types": { 00:05:42.267 "read": true, 00:05:42.267 "write": true, 00:05:42.267 "unmap": true, 00:05:42.267 "flush": true, 00:05:42.267 "reset": true, 00:05:42.267 "nvme_admin": false, 00:05:42.267 "nvme_io": false, 00:05:42.267 "nvme_io_md": false, 00:05:42.267 "write_zeroes": true, 00:05:42.267 "zcopy": true, 00:05:42.267 "get_zone_info": false, 00:05:42.267 
"zone_management": false, 00:05:42.267 "zone_append": false, 00:05:42.267 "compare": false, 00:05:42.267 "compare_and_write": false, 00:05:42.267 "abort": true, 00:05:42.267 "seek_hole": false, 00:05:42.267 "seek_data": false, 00:05:42.267 "copy": true, 00:05:42.267 "nvme_iov_md": false 00:05:42.267 }, 00:05:42.267 "memory_domains": [ 00:05:42.267 { 00:05:42.267 "dma_device_id": "system", 00:05:42.268 "dma_device_type": 1 00:05:42.268 }, 00:05:42.268 { 00:05:42.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.268 "dma_device_type": 2 00:05:42.268 } 00:05:42.268 ], 00:05:42.268 "driver_specific": {} 00:05:42.268 } 00:05:42.268 ]' 00:05:42.268 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.268 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.268 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.268 [2024-11-17 18:26:28.676219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:42.268 [2024-11-17 18:26:28.676264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.268 [2024-11-17 18:26:28.676293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2043b80 00:05:42.268 [2024-11-17 18:26:28.676312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.268 [2024-11-17 18:26:28.677654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.268 [2024-11-17 18:26:28.677704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.268 Passthru0 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.268 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.268 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.268 { 00:05:42.268 "name": "Malloc0", 00:05:42.268 "aliases": [ 00:05:42.268 "025575df-db21-457c-8c06-74ef30ada6d5" 00:05:42.268 ], 00:05:42.268 "product_name": "Malloc disk", 00:05:42.268 "block_size": 512, 00:05:42.268 "num_blocks": 16384, 00:05:42.268 "uuid": "025575df-db21-457c-8c06-74ef30ada6d5", 00:05:42.268 "assigned_rate_limits": { 00:05:42.268 "rw_ios_per_sec": 0, 00:05:42.268 "rw_mbytes_per_sec": 0, 00:05:42.268 "r_mbytes_per_sec": 0, 00:05:42.268 "w_mbytes_per_sec": 0 00:05:42.268 }, 00:05:42.268 "claimed": true, 00:05:42.268 "claim_type": "exclusive_write", 00:05:42.268 "zoned": false, 00:05:42.268 "supported_io_types": { 00:05:42.268 "read": true, 00:05:42.268 "write": true, 00:05:42.268 "unmap": true, 00:05:42.268 "flush": true, 00:05:42.268 "reset": true, 00:05:42.268 "nvme_admin": false, 00:05:42.268 "nvme_io": false, 00:05:42.268 "nvme_io_md": false, 00:05:42.268 "write_zeroes": true, 00:05:42.268 "zcopy": true, 00:05:42.268 "get_zone_info": false, 00:05:42.268 "zone_management": false, 00:05:42.268 "zone_append": false, 00:05:42.268 "compare": false, 00:05:42.268 "compare_and_write": false, 00:05:42.268 "abort": true, 00:05:42.268 "seek_hole": false, 00:05:42.268 "seek_data": false, 00:05:42.268 "copy": true, 00:05:42.268 "nvme_iov_md": false 00:05:42.268 }, 00:05:42.268 "memory_domains": [ 00:05:42.268 { 00:05:42.268 "dma_device_id": "system", 00:05:42.268 "dma_device_type": 1 00:05:42.268 }, 00:05:42.268 { 00:05:42.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.268 "dma_device_type": 2 00:05:42.268 } 00:05:42.268 ], 00:05:42.268 "driver_specific": {} 00:05:42.268 }, 00:05:42.268 { 
00:05:42.268 "name": "Passthru0", 00:05:42.268 "aliases": [ 00:05:42.268 "6366ed76-e5c4-521b-b73f-7fb11621207f" 00:05:42.268 ], 00:05:42.268 "product_name": "passthru", 00:05:42.268 "block_size": 512, 00:05:42.268 "num_blocks": 16384, 00:05:42.268 "uuid": "6366ed76-e5c4-521b-b73f-7fb11621207f", 00:05:42.268 "assigned_rate_limits": { 00:05:42.268 "rw_ios_per_sec": 0, 00:05:42.268 "rw_mbytes_per_sec": 0, 00:05:42.268 "r_mbytes_per_sec": 0, 00:05:42.268 "w_mbytes_per_sec": 0 00:05:42.268 }, 00:05:42.268 "claimed": false, 00:05:42.268 "zoned": false, 00:05:42.268 "supported_io_types": { 00:05:42.268 "read": true, 00:05:42.268 "write": true, 00:05:42.268 "unmap": true, 00:05:42.268 "flush": true, 00:05:42.268 "reset": true, 00:05:42.268 "nvme_admin": false, 00:05:42.268 "nvme_io": false, 00:05:42.268 "nvme_io_md": false, 00:05:42.268 "write_zeroes": true, 00:05:42.268 "zcopy": true, 00:05:42.268 "get_zone_info": false, 00:05:42.268 "zone_management": false, 00:05:42.268 "zone_append": false, 00:05:42.268 "compare": false, 00:05:42.268 "compare_and_write": false, 00:05:42.268 "abort": true, 00:05:42.268 "seek_hole": false, 00:05:42.268 "seek_data": false, 00:05:42.268 "copy": true, 00:05:42.268 "nvme_iov_md": false 00:05:42.268 }, 00:05:42.268 "memory_domains": [ 00:05:42.268 { 00:05:42.268 "dma_device_id": "system", 00:05:42.268 "dma_device_type": 1 00:05:42.268 }, 00:05:42.268 { 00:05:42.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.268 "dma_device_type": 2 00:05:42.268 } 00:05:42.268 ], 00:05:42.268 "driver_specific": { 00:05:42.268 "passthru": { 00:05:42.268 "name": "Passthru0", 00:05:42.268 "base_bdev_name": "Malloc0" 00:05:42.268 } 00:05:42.268 } 00:05:42.268 } 00:05:42.268 ]' 00:05:42.268 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.268 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.268 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.268 18:26:28 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.268 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.268 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.268 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.268 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.268 18:26:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.268 00:05:42.268 real 0m0.219s 00:05:42.268 user 0m0.142s 00:05:42.268 sys 0m0.022s 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.268 18:26:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.268 ************************************ 00:05:42.268 END TEST rpc_integrity 00:05:42.268 ************************************ 00:05:42.268 18:26:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:42.268 18:26:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.268 18:26:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.268 18:26:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.268 ************************************ 00:05:42.268 START TEST rpc_plugins 
00:05:42.268 ************************************ 00:05:42.269 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:42.269 18:26:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:42.269 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.269 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.555 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.555 18:26:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:42.555 18:26:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:42.555 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.555 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.555 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.555 18:26:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:42.555 { 00:05:42.555 "name": "Malloc1", 00:05:42.555 "aliases": [ 00:05:42.555 "972f5a9a-cb3d-4997-af2c-b7962cb25307" 00:05:42.555 ], 00:05:42.555 "product_name": "Malloc disk", 00:05:42.555 "block_size": 4096, 00:05:42.555 "num_blocks": 256, 00:05:42.555 "uuid": "972f5a9a-cb3d-4997-af2c-b7962cb25307", 00:05:42.555 "assigned_rate_limits": { 00:05:42.555 "rw_ios_per_sec": 0, 00:05:42.555 "rw_mbytes_per_sec": 0, 00:05:42.555 "r_mbytes_per_sec": 0, 00:05:42.555 "w_mbytes_per_sec": 0 00:05:42.555 }, 00:05:42.555 "claimed": false, 00:05:42.555 "zoned": false, 00:05:42.555 "supported_io_types": { 00:05:42.555 "read": true, 00:05:42.555 "write": true, 00:05:42.555 "unmap": true, 00:05:42.555 "flush": true, 00:05:42.555 "reset": true, 00:05:42.555 "nvme_admin": false, 00:05:42.555 "nvme_io": false, 00:05:42.555 "nvme_io_md": false, 00:05:42.555 "write_zeroes": true, 00:05:42.555 "zcopy": true, 00:05:42.555 "get_zone_info": false, 00:05:42.555 "zone_management": false, 00:05:42.555 
"zone_append": false, 00:05:42.555 "compare": false, 00:05:42.555 "compare_and_write": false, 00:05:42.555 "abort": true, 00:05:42.555 "seek_hole": false, 00:05:42.555 "seek_data": false, 00:05:42.555 "copy": true, 00:05:42.555 "nvme_iov_md": false 00:05:42.555 }, 00:05:42.555 "memory_domains": [ 00:05:42.555 { 00:05:42.555 "dma_device_id": "system", 00:05:42.555 "dma_device_type": 1 00:05:42.555 }, 00:05:42.555 { 00:05:42.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.555 "dma_device_type": 2 00:05:42.555 } 00:05:42.555 ], 00:05:42.555 "driver_specific": {} 00:05:42.555 } 00:05:42.555 ]' 00:05:42.555 18:26:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:42.555 18:26:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:42.555 18:26:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:42.555 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.555 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.555 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.555 18:26:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:42.555 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.555 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.555 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.555 18:26:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:42.555 18:26:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:42.555 18:26:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:42.555 00:05:42.555 real 0m0.115s 00:05:42.555 user 0m0.074s 00:05:42.555 sys 0m0.009s 00:05:42.555 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.555 18:26:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.555 ************************************ 
00:05:42.555 END TEST rpc_plugins 00:05:42.555 ************************************ 00:05:42.555 18:26:28 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:42.555 18:26:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.555 18:26:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.555 18:26:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.555 ************************************ 00:05:42.555 START TEST rpc_trace_cmd_test 00:05:42.555 ************************************ 00:05:42.555 18:26:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:42.555 18:26:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:42.555 18:26:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:42.555 18:26:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.555 18:26:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.555 18:26:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.555 18:26:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:42.555 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid590873", 00:05:42.555 "tpoint_group_mask": "0x8", 00:05:42.555 "iscsi_conn": { 00:05:42.555 "mask": "0x2", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "scsi": { 00:05:42.555 "mask": "0x4", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "bdev": { 00:05:42.555 "mask": "0x8", 00:05:42.555 "tpoint_mask": "0xffffffffffffffff" 00:05:42.555 }, 00:05:42.555 "nvmf_rdma": { 00:05:42.555 "mask": "0x10", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "nvmf_tcp": { 00:05:42.555 "mask": "0x20", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "ftl": { 00:05:42.555 "mask": "0x40", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "blobfs": { 00:05:42.555 "mask": "0x80", 00:05:42.555 
"tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "dsa": { 00:05:42.555 "mask": "0x200", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "thread": { 00:05:42.555 "mask": "0x400", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "nvme_pcie": { 00:05:42.555 "mask": "0x800", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "iaa": { 00:05:42.555 "mask": "0x1000", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "nvme_tcp": { 00:05:42.555 "mask": "0x2000", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "bdev_nvme": { 00:05:42.555 "mask": "0x4000", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "sock": { 00:05:42.555 "mask": "0x8000", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "blob": { 00:05:42.555 "mask": "0x10000", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "bdev_raid": { 00:05:42.555 "mask": "0x20000", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 }, 00:05:42.555 "scheduler": { 00:05:42.555 "mask": "0x40000", 00:05:42.555 "tpoint_mask": "0x0" 00:05:42.555 } 00:05:42.555 }' 00:05:42.555 18:26:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:42.555 18:26:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:42.555 18:26:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.555 18:26:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.555 18:26:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.555 18:26:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.555 18:26:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.836 18:26:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.836 18:26:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.836 18:26:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:42.836 00:05:42.836 real 0m0.194s 00:05:42.836 user 0m0.170s 00:05:42.836 sys 0m0.016s 00:05:42.836 18:26:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.836 18:26:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.836 ************************************ 00:05:42.836 END TEST rpc_trace_cmd_test 00:05:42.836 ************************************ 00:05:42.836 18:26:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.836 18:26:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.836 18:26:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.836 18:26:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.836 18:26:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.836 18:26:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.836 ************************************ 00:05:42.836 START TEST rpc_daemon_integrity 00:05:42.836 ************************************ 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.836 { 00:05:42.836 "name": "Malloc2", 00:05:42.836 "aliases": [ 00:05:42.836 "5f7b2f25-587b-498d-8ed1-dd53c6acee10" 00:05:42.836 ], 00:05:42.836 "product_name": "Malloc disk", 00:05:42.836 "block_size": 512, 00:05:42.836 "num_blocks": 16384, 00:05:42.836 "uuid": "5f7b2f25-587b-498d-8ed1-dd53c6acee10", 00:05:42.836 "assigned_rate_limits": { 00:05:42.836 "rw_ios_per_sec": 0, 00:05:42.836 "rw_mbytes_per_sec": 0, 00:05:42.836 "r_mbytes_per_sec": 0, 00:05:42.836 "w_mbytes_per_sec": 0 00:05:42.836 }, 00:05:42.836 "claimed": false, 00:05:42.836 "zoned": false, 00:05:42.836 "supported_io_types": { 00:05:42.836 "read": true, 00:05:42.836 "write": true, 00:05:42.836 "unmap": true, 00:05:42.836 "flush": true, 00:05:42.836 "reset": true, 00:05:42.836 "nvme_admin": false, 00:05:42.836 "nvme_io": false, 00:05:42.836 "nvme_io_md": false, 00:05:42.836 "write_zeroes": true, 00:05:42.836 "zcopy": true, 00:05:42.836 "get_zone_info": false, 00:05:42.836 "zone_management": false, 00:05:42.836 "zone_append": false, 00:05:42.836 "compare": false, 00:05:42.836 "compare_and_write": false, 00:05:42.836 "abort": true, 00:05:42.836 "seek_hole": false, 00:05:42.836 "seek_data": false, 00:05:42.836 "copy": true, 00:05:42.836 "nvme_iov_md": false 00:05:42.836 }, 00:05:42.836 "memory_domains": [ 00:05:42.836 { 
00:05:42.836 "dma_device_id": "system", 00:05:42.836 "dma_device_type": 1 00:05:42.836 }, 00:05:42.836 { 00:05:42.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.836 "dma_device_type": 2 00:05:42.836 } 00:05:42.836 ], 00:05:42.836 "driver_specific": {} 00:05:42.836 } 00:05:42.836 ]' 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.836 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.837 [2024-11-17 18:26:29.334611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:42.837 [2024-11-17 18:26:29.334669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.837 [2024-11-17 18:26:29.334718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20d4790 00:05:42.837 [2024-11-17 18:26:29.334744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.837 [2024-11-17 18:26:29.336126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.837 [2024-11-17 18:26:29.336151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.837 Passthru0 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.837 { 00:05:42.837 "name": "Malloc2", 00:05:42.837 "aliases": [ 00:05:42.837 "5f7b2f25-587b-498d-8ed1-dd53c6acee10" 00:05:42.837 ], 00:05:42.837 "product_name": "Malloc disk", 00:05:42.837 "block_size": 512, 00:05:42.837 "num_blocks": 16384, 00:05:42.837 "uuid": "5f7b2f25-587b-498d-8ed1-dd53c6acee10", 00:05:42.837 "assigned_rate_limits": { 00:05:42.837 "rw_ios_per_sec": 0, 00:05:42.837 "rw_mbytes_per_sec": 0, 00:05:42.837 "r_mbytes_per_sec": 0, 00:05:42.837 "w_mbytes_per_sec": 0 00:05:42.837 }, 00:05:42.837 "claimed": true, 00:05:42.837 "claim_type": "exclusive_write", 00:05:42.837 "zoned": false, 00:05:42.837 "supported_io_types": { 00:05:42.837 "read": true, 00:05:42.837 "write": true, 00:05:42.837 "unmap": true, 00:05:42.837 "flush": true, 00:05:42.837 "reset": true, 00:05:42.837 "nvme_admin": false, 00:05:42.837 "nvme_io": false, 00:05:42.837 "nvme_io_md": false, 00:05:42.837 "write_zeroes": true, 00:05:42.837 "zcopy": true, 00:05:42.837 "get_zone_info": false, 00:05:42.837 "zone_management": false, 00:05:42.837 "zone_append": false, 00:05:42.837 "compare": false, 00:05:42.837 "compare_and_write": false, 00:05:42.837 "abort": true, 00:05:42.837 "seek_hole": false, 00:05:42.837 "seek_data": false, 00:05:42.837 "copy": true, 00:05:42.837 "nvme_iov_md": false 00:05:42.837 }, 00:05:42.837 "memory_domains": [ 00:05:42.837 { 00:05:42.837 "dma_device_id": "system", 00:05:42.837 "dma_device_type": 1 00:05:42.837 }, 00:05:42.837 { 00:05:42.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.837 "dma_device_type": 2 00:05:42.837 } 00:05:42.837 ], 00:05:42.837 "driver_specific": {} 00:05:42.837 }, 00:05:42.837 { 00:05:42.837 "name": "Passthru0", 00:05:42.837 "aliases": [ 00:05:42.837 "ea6a3774-1819-590b-9389-49f00819b949" 00:05:42.837 ], 00:05:42.837 "product_name": "passthru", 00:05:42.837 "block_size": 512, 00:05:42.837 "num_blocks": 16384, 00:05:42.837 "uuid": 
"ea6a3774-1819-590b-9389-49f00819b949", 00:05:42.837 "assigned_rate_limits": { 00:05:42.837 "rw_ios_per_sec": 0, 00:05:42.837 "rw_mbytes_per_sec": 0, 00:05:42.837 "r_mbytes_per_sec": 0, 00:05:42.837 "w_mbytes_per_sec": 0 00:05:42.837 }, 00:05:42.837 "claimed": false, 00:05:42.837 "zoned": false, 00:05:42.837 "supported_io_types": { 00:05:42.837 "read": true, 00:05:42.837 "write": true, 00:05:42.837 "unmap": true, 00:05:42.837 "flush": true, 00:05:42.837 "reset": true, 00:05:42.837 "nvme_admin": false, 00:05:42.837 "nvme_io": false, 00:05:42.837 "nvme_io_md": false, 00:05:42.837 "write_zeroes": true, 00:05:42.837 "zcopy": true, 00:05:42.837 "get_zone_info": false, 00:05:42.837 "zone_management": false, 00:05:42.837 "zone_append": false, 00:05:42.837 "compare": false, 00:05:42.837 "compare_and_write": false, 00:05:42.837 "abort": true, 00:05:42.837 "seek_hole": false, 00:05:42.837 "seek_data": false, 00:05:42.837 "copy": true, 00:05:42.837 "nvme_iov_md": false 00:05:42.837 }, 00:05:42.837 "memory_domains": [ 00:05:42.837 { 00:05:42.837 "dma_device_id": "system", 00:05:42.837 "dma_device_type": 1 00:05:42.837 }, 00:05:42.837 { 00:05:42.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.837 "dma_device_type": 2 00:05:42.837 } 00:05:42.837 ], 00:05:42.837 "driver_specific": { 00:05:42.837 "passthru": { 00:05:42.837 "name": "Passthru0", 00:05:42.837 "base_bdev_name": "Malloc2" 00:05:42.837 } 00:05:42.837 } 00:05:42.837 } 00:05:42.837 ]' 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.837 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:43.095 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:43.095 18:26:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:43.095 00:05:43.095 real 0m0.220s 00:05:43.095 user 0m0.142s 00:05:43.095 sys 0m0.021s 00:05:43.095 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.095 18:26:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.095 ************************************ 00:05:43.095 END TEST rpc_daemon_integrity 00:05:43.095 ************************************ 00:05:43.095 18:26:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:43.095 18:26:29 rpc -- rpc/rpc.sh@84 -- # killprocess 590873 00:05:43.095 18:26:29 rpc -- common/autotest_common.sh@954 -- # '[' -z 590873 ']' 00:05:43.095 18:26:29 rpc -- common/autotest_common.sh@958 -- # kill -0 590873 00:05:43.095 18:26:29 rpc -- common/autotest_common.sh@959 -- # uname 00:05:43.095 18:26:29 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.095 18:26:29 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 590873 00:05:43.095 18:26:29 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.095 18:26:29 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.095 18:26:29 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 590873' 00:05:43.095 killing process with pid 590873 00:05:43.095 18:26:29 rpc -- common/autotest_common.sh@973 -- # kill 590873 00:05:43.095 18:26:29 rpc -- common/autotest_common.sh@978 -- # wait 590873 00:05:43.354 00:05:43.354 real 0m1.913s 00:05:43.354 user 0m2.389s 00:05:43.354 sys 0m0.611s 00:05:43.354 18:26:29 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.354 18:26:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.354 ************************************ 00:05:43.354 END TEST rpc 00:05:43.354 ************************************ 00:05:43.354 18:26:29 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.354 18:26:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.354 18:26:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.354 18:26:29 -- common/autotest_common.sh@10 -- # set +x 00:05:43.612 ************************************ 00:05:43.612 START TEST skip_rpc 00:05:43.612 ************************************ 00:05:43.612 18:26:29 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.612 * Looking for test storage... 
00:05:43.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:43.612 18:26:29 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.612 18:26:29 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.612 18:26:29 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.612 18:26:30 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.612 18:26:30 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:43.612 18:26:30 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.612 18:26:30 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.612 --rc genhtml_branch_coverage=1 00:05:43.612 --rc genhtml_function_coverage=1 00:05:43.612 --rc genhtml_legend=1 00:05:43.612 --rc geninfo_all_blocks=1 00:05:43.612 --rc geninfo_unexecuted_blocks=1 00:05:43.612 00:05:43.612 ' 00:05:43.612 18:26:30 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.612 --rc genhtml_branch_coverage=1 00:05:43.612 --rc genhtml_function_coverage=1 00:05:43.612 --rc genhtml_legend=1 00:05:43.612 --rc geninfo_all_blocks=1 00:05:43.612 --rc geninfo_unexecuted_blocks=1 00:05:43.612 00:05:43.612 ' 00:05:43.612 18:26:30 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:43.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.612 --rc genhtml_branch_coverage=1 00:05:43.612 --rc genhtml_function_coverage=1 00:05:43.612 --rc genhtml_legend=1 00:05:43.612 --rc geninfo_all_blocks=1 00:05:43.612 --rc geninfo_unexecuted_blocks=1 00:05:43.612 00:05:43.612 ' 00:05:43.612 18:26:30 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.612 --rc genhtml_branch_coverage=1 00:05:43.612 --rc genhtml_function_coverage=1 00:05:43.612 --rc genhtml_legend=1 00:05:43.612 --rc geninfo_all_blocks=1 00:05:43.612 --rc geninfo_unexecuted_blocks=1 00:05:43.612 00:05:43.612 ' 00:05:43.612 18:26:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:43.612 18:26:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:43.612 18:26:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:43.612 18:26:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.612 18:26:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.612 18:26:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.612 ************************************ 00:05:43.612 START TEST skip_rpc 00:05:43.612 ************************************ 00:05:43.612 18:26:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:43.612 18:26:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=591269 00:05:43.612 18:26:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:43.612 18:26:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.613 18:26:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:43.613 [2024-11-17 18:26:30.168227] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:43.613 [2024-11-17 18:26:30.168290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid591269 ] 00:05:43.871 [2024-11-17 18:26:30.237316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.871 [2024-11-17 18:26:30.287087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.135 18:26:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:49.136 18:26:35 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 591269 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 591269 ']' 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 591269 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 591269 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 591269' 00:05:49.136 killing process with pid 591269 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 591269 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 591269 00:05:49.136 00:05:49.136 real 0m5.420s 00:05:49.136 user 0m5.117s 00:05:49.136 sys 0m0.319s 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.136 18:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.136 ************************************ 00:05:49.136 END TEST skip_rpc 00:05:49.136 ************************************ 00:05:49.136 18:26:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:49.136 18:26:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.136 18:26:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.136 18:26:35 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:49.136 ************************************ 00:05:49.136 START TEST skip_rpc_with_json 00:05:49.136 ************************************ 00:05:49.136 18:26:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:49.136 18:26:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:49.136 18:26:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=591956 00:05:49.136 18:26:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.136 18:26:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.136 18:26:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 591956 00:05:49.136 18:26:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 591956 ']' 00:05:49.136 18:26:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.136 18:26:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.136 18:26:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.136 18:26:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.136 18:26:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.136 [2024-11-17 18:26:35.644973] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:05:49.136 [2024-11-17 18:26:35.645082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid591956 ] 00:05:49.394 [2024-11-17 18:26:35.712424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.394 [2024-11-17 18:26:35.761883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.653 18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.653 18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:49.653 18:26:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:49.653 18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.653 18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.653 [2024-11-17 18:26:36.023408] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:49.653 request: 00:05:49.653 { 00:05:49.653 "trtype": "tcp", 00:05:49.653 "method": "nvmf_get_transports", 00:05:49.653 "req_id": 1 00:05:49.653 } 00:05:49.653 Got JSON-RPC error response 00:05:49.653 response: 00:05:49.653 { 00:05:49.653 "code": -19, 00:05:49.653 "message": "No such device" 00:05:49.653 } 00:05:49.653 18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:49.653 18:26:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:49.653 18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.653 18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.653 [2024-11-17 18:26:36.031514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.653 18:26:36 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.653 18:26:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:49.653 18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.653 18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.653 18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.653 18:26:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:49.653 { 00:05:49.653 "subsystems": [ 00:05:49.653 { 00:05:49.653 "subsystem": "fsdev", 00:05:49.653 "config": [ 00:05:49.653 { 00:05:49.653 "method": "fsdev_set_opts", 00:05:49.653 "params": { 00:05:49.653 "fsdev_io_pool_size": 65535, 00:05:49.653 "fsdev_io_cache_size": 256 00:05:49.653 } 00:05:49.653 } 00:05:49.653 ] 00:05:49.653 }, 00:05:49.653 { 00:05:49.653 "subsystem": "vfio_user_target", 00:05:49.653 "config": null 00:05:49.653 }, 00:05:49.653 { 00:05:49.653 "subsystem": "keyring", 00:05:49.653 "config": [] 00:05:49.653 }, 00:05:49.653 { 00:05:49.653 "subsystem": "iobuf", 00:05:49.653 "config": [ 00:05:49.653 { 00:05:49.653 "method": "iobuf_set_options", 00:05:49.653 "params": { 00:05:49.653 "small_pool_count": 8192, 00:05:49.653 "large_pool_count": 1024, 00:05:49.653 "small_bufsize": 8192, 00:05:49.653 "large_bufsize": 135168, 00:05:49.653 "enable_numa": false 00:05:49.653 } 00:05:49.653 } 00:05:49.653 ] 00:05:49.653 }, 00:05:49.653 { 00:05:49.653 "subsystem": "sock", 00:05:49.653 "config": [ 00:05:49.653 { 00:05:49.653 "method": "sock_set_default_impl", 00:05:49.653 "params": { 00:05:49.653 "impl_name": "posix" 00:05:49.653 } 00:05:49.653 }, 00:05:49.653 { 00:05:49.653 "method": "sock_impl_set_options", 00:05:49.653 "params": { 00:05:49.653 "impl_name": "ssl", 00:05:49.653 "recv_buf_size": 4096, 00:05:49.653 "send_buf_size": 4096, 
00:05:49.653  "enable_recv_pipe": true,
00:05:49.653  "enable_quickack": false,
00:05:49.653  "enable_placement_id": 0,
00:05:49.653  "enable_zerocopy_send_server": true,
00:05:49.653  "enable_zerocopy_send_client": false,
00:05:49.653  "zerocopy_threshold": 0,
00:05:49.653  "tls_version": 0,
00:05:49.653  "enable_ktls": false
00:05:49.653  }
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "method": "sock_impl_set_options",
00:05:49.653  "params": {
00:05:49.653  "impl_name": "posix",
00:05:49.653  "recv_buf_size": 2097152,
00:05:49.653  "send_buf_size": 2097152,
00:05:49.653  "enable_recv_pipe": true,
00:05:49.653  "enable_quickack": false,
00:05:49.653  "enable_placement_id": 0,
00:05:49.653  "enable_zerocopy_send_server": true,
00:05:49.653  "enable_zerocopy_send_client": false,
00:05:49.653  "zerocopy_threshold": 0,
00:05:49.653  "tls_version": 0,
00:05:49.653  "enable_ktls": false
00:05:49.653  }
00:05:49.653  }
00:05:49.653  ]
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "subsystem": "vmd",
00:05:49.653  "config": []
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "subsystem": "accel",
00:05:49.653  "config": [
00:05:49.653  {
00:05:49.653  "method": "accel_set_options",
00:05:49.653  "params": {
00:05:49.653  "small_cache_size": 128,
00:05:49.653  "large_cache_size": 16,
00:05:49.653  "task_count": 2048,
00:05:49.653  "sequence_count": 2048,
00:05:49.653  "buf_count": 2048
00:05:49.653  }
00:05:49.653  }
00:05:49.653  ]
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "subsystem": "bdev",
00:05:49.653  "config": [
00:05:49.653  {
00:05:49.653  "method": "bdev_set_options",
00:05:49.653  "params": {
00:05:49.653  "bdev_io_pool_size": 65535,
00:05:49.653  "bdev_io_cache_size": 256,
00:05:49.653  "bdev_auto_examine": true,
00:05:49.653  "iobuf_small_cache_size": 128,
00:05:49.653  "iobuf_large_cache_size": 16
00:05:49.653  }
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "method": "bdev_raid_set_options",
00:05:49.653  "params": {
00:05:49.653  "process_window_size_kb": 1024,
00:05:49.653  "process_max_bandwidth_mb_sec": 0
00:05:49.653  }
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "method": "bdev_iscsi_set_options",
00:05:49.653  "params": {
00:05:49.653  "timeout_sec": 30
00:05:49.653  }
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "method": "bdev_nvme_set_options",
00:05:49.653  "params": {
00:05:49.653  "action_on_timeout": "none",
00:05:49.653  "timeout_us": 0,
00:05:49.653  "timeout_admin_us": 0,
00:05:49.653  "keep_alive_timeout_ms": 10000,
00:05:49.653  "arbitration_burst": 0,
00:05:49.653  "low_priority_weight": 0,
00:05:49.653  "medium_priority_weight": 0,
00:05:49.653  "high_priority_weight": 0,
00:05:49.653  "nvme_adminq_poll_period_us": 10000,
00:05:49.653  "nvme_ioq_poll_period_us": 0,
00:05:49.653  "io_queue_requests": 0,
00:05:49.653  "delay_cmd_submit": true,
00:05:49.653  "transport_retry_count": 4,
00:05:49.653  "bdev_retry_count": 3,
00:05:49.653  "transport_ack_timeout": 0,
00:05:49.653  "ctrlr_loss_timeout_sec": 0,
00:05:49.653  "reconnect_delay_sec": 0,
00:05:49.653  "fast_io_fail_timeout_sec": 0,
00:05:49.653  "disable_auto_failback": false,
00:05:49.653  "generate_uuids": false,
00:05:49.653  "transport_tos": 0,
00:05:49.653  "nvme_error_stat": false,
00:05:49.653  "rdma_srq_size": 0,
00:05:49.653  "io_path_stat": false,
00:05:49.653  "allow_accel_sequence": false,
00:05:49.653  "rdma_max_cq_size": 0,
00:05:49.653  "rdma_cm_event_timeout_ms": 0,
00:05:49.653  "dhchap_digests": [
00:05:49.653  "sha256",
00:05:49.653  "sha384",
00:05:49.653  "sha512"
00:05:49.653  ],
00:05:49.653  "dhchap_dhgroups": [
00:05:49.653  "null",
00:05:49.653  "ffdhe2048",
00:05:49.653  "ffdhe3072",
00:05:49.653  "ffdhe4096",
00:05:49.653  "ffdhe6144",
00:05:49.653  "ffdhe8192"
00:05:49.653  ]
00:05:49.653  }
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "method": "bdev_nvme_set_hotplug",
00:05:49.653  "params": {
00:05:49.653  "period_us": 100000,
00:05:49.653  "enable": false
00:05:49.653  }
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "method": "bdev_wait_for_examine"
00:05:49.653  }
00:05:49.653  ]
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "subsystem": "scsi",
00:05:49.653  "config": null
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "subsystem": "scheduler",
00:05:49.653  "config": [
00:05:49.653  {
00:05:49.653  "method": "framework_set_scheduler",
00:05:49.653  "params": {
00:05:49.653  "name": "static"
00:05:49.653  }
00:05:49.653  }
00:05:49.653  ]
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "subsystem": "vhost_scsi",
00:05:49.653  "config": []
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "subsystem": "vhost_blk",
00:05:49.653  "config": []
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "subsystem": "ublk",
00:05:49.653  "config": []
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "subsystem": "nbd",
00:05:49.653  "config": []
00:05:49.653  },
00:05:49.653  {
00:05:49.653  "subsystem": "nvmf",
00:05:49.653  "config": [
00:05:49.653  {
00:05:49.653  "method": "nvmf_set_config",
00:05:49.653  "params": {
00:05:49.653  "discovery_filter": "match_any",
00:05:49.653  "admin_cmd_passthru": {
00:05:49.653  "identify_ctrlr": false
00:05:49.653  },
00:05:49.653  "dhchap_digests": [
00:05:49.653  "sha256",
00:05:49.653  "sha384",
00:05:49.653  "sha512"
00:05:49.653  ],
00:05:49.654  "dhchap_dhgroups": [
00:05:49.654  "null",
00:05:49.654  "ffdhe2048",
00:05:49.654  "ffdhe3072",
00:05:49.654  "ffdhe4096",
00:05:49.654  "ffdhe6144",
00:05:49.654  "ffdhe8192"
00:05:49.654  ]
00:05:49.654  }
00:05:49.654  },
00:05:49.654  {
00:05:49.654  "method": "nvmf_set_max_subsystems",
00:05:49.654  "params": {
00:05:49.654  "max_subsystems": 1024
00:05:49.654  }
00:05:49.654  },
00:05:49.654  {
00:05:49.654  "method": "nvmf_set_crdt",
00:05:49.654  "params": {
00:05:49.654  "crdt1": 0,
00:05:49.654  "crdt2": 0,
00:05:49.654  "crdt3": 0
00:05:49.654  }
00:05:49.654  },
00:05:49.654  {
00:05:49.654  "method": "nvmf_create_transport",
00:05:49.654  "params": {
00:05:49.654  "trtype": "TCP",
00:05:49.654  "max_queue_depth": 128,
00:05:49.654  "max_io_qpairs_per_ctrlr": 127,
00:05:49.654  "in_capsule_data_size": 4096,
00:05:49.654  "max_io_size": 131072,
00:05:49.654  "io_unit_size": 131072,
00:05:49.654  "max_aq_depth": 128,
00:05:49.654  "num_shared_buffers": 511,
00:05:49.654  "buf_cache_size": 4294967295,
00:05:49.654  "dif_insert_or_strip": false,
00:05:49.654  "zcopy": false,
00:05:49.654  "c2h_success": true,
00:05:49.654  "sock_priority": 0,
00:05:49.654  "abort_timeout_sec": 1,
00:05:49.654  "ack_timeout": 0,
00:05:49.654  "data_wr_pool_size": 0
00:05:49.654  }
00:05:49.654  }
00:05:49.654  ]
00:05:49.654  },
00:05:49.654  {
00:05:49.654  "subsystem": "iscsi",
00:05:49.654  "config": [
00:05:49.654  {
00:05:49.654  "method": "iscsi_set_options",
00:05:49.654  "params": {
00:05:49.654  "node_base": "iqn.2016-06.io.spdk",
00:05:49.654  "max_sessions": 128,
00:05:49.654  "max_connections_per_session": 2,
00:05:49.654  "max_queue_depth": 64,
00:05:49.654  "default_time2wait": 2,
00:05:49.654  "default_time2retain": 20,
00:05:49.654  "first_burst_length": 8192,
00:05:49.654  "immediate_data": true,
00:05:49.654  "allow_duplicated_isid": false,
00:05:49.654  "error_recovery_level": 0,
00:05:49.654  "nop_timeout": 60,
00:05:49.654  "nop_in_interval": 30,
00:05:49.654  "disable_chap": false,
00:05:49.654  "require_chap": false,
00:05:49.654  "mutual_chap": false,
00:05:49.654  "chap_group": 0,
00:05:49.654  "max_large_datain_per_connection": 64,
00:05:49.654  "max_r2t_per_connection": 4,
00:05:49.654  "pdu_pool_size": 36864,
00:05:49.654  "immediate_data_pool_size": 16384,
00:05:49.654  "data_out_pool_size": 2048
00:05:49.654  }
00:05:49.654  }
00:05:49.654  ]
00:05:49.654  }
00:05:49.654  ]
00:05:49.654  }
00:05:49.654  18:26:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:05:49.654  18:26:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 591956
00:05:49.654  18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 591956 ']'
00:05:49.654  18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 591956
00:05:49.654  18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:05:49.654  18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:49.654  18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 591956
00:05:49.912  18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:49.912  18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:49.912  18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 591956'
killing process with pid 591956
18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 591956
18:26:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 591956
00:05:50.170  18:26:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=592093
00:05:50.170  18:26:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:50.170  18:26:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:05:55.429  18:26:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 592093
00:05:55.429  18:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 592093 ']'
00:05:55.429  18:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 592093
00:05:55.429  18:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:05:55.429  18:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:55.429  18:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 592093
00:05:55.429  18:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:55.429  18:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:55.429  18:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 592093'
killing process with pid 592093
18:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 592093
18:26:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 592093
00:05:55.429  18:26:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:05:55.688
00:05:55.688  real	0m6.421s
00:05:55.688  user	0m6.070s
00:05:55.688  sys	0m0.680s
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:55.688  ************************************
00:05:55.688  END TEST skip_rpc_with_json
00:05:55.688  ************************************
00:05:55.688  18:26:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:05:55.688  18:26:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:55.688  18:26:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:55.688  18:26:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:55.688  ************************************
00:05:55.688  START TEST skip_rpc_with_delay
00:05:55.688  ************************************
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
[2024-11-17 18:26:42.116114] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:55.688
00:05:55.688  real	0m0.073s
00:05:55.688  user	0m0.043s
00:05:55.688  sys	0m0.030s
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:55.688  18:26:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:05:55.688  ************************************
00:05:55.688  END TEST skip_rpc_with_delay
00:05:55.688  ************************************
00:05:55.688  18:26:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:05:55.688  18:26:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:05:55.688  18:26:42 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:05:55.688  18:26:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:55.688  18:26:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:55.688  18:26:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:55.688  ************************************
00:05:55.688  START TEST exit_on_failed_rpc_init
00:05:55.688  ************************************
00:05:55.688  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:05:55.688  18:26:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=592809
00:05:55.688  18:26:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:55.688  18:26:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 592809
00:05:55.688  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 592809 ']'
00:05:55.689  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:55.689  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:55.689  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:55.689  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:55.689  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:55.689  [2024-11-17 18:26:42.242100] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:05:55.689  [2024-11-17 18:26:42.242198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid592809 ]
00:05:55.947  [2024-11-17 18:26:42.309966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:55.947  [2024-11-17 18:26:42.359274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:56.205  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:56.206  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:05:56.206  18:26:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:56.206  18:26:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:56.206  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:05:56.206  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:56.206  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:56.206  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:56.206  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:56.206  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:56.206  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:56.206  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:56.206  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:56.206  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:56.206  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:56.206  [2024-11-17 18:26:42.671247] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:05:56.206  [2024-11-17 18:26:42.671323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid592820 ]
[2024-11-17 18:26:42.738684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-17 18:26:42.788060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-17 18:26:42.788150] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
[2024-11-17 18:26:42.788169] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
[2024-11-17 18:26:42.788181] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:56.463  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:05:56.463  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:56.463  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:05:56.463  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:05:56.463  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:05:56.463  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:56.463  18:26:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:56.463  18:26:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 592809
00:05:56.463  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 592809 ']'
00:05:56.463  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 592809
00:05:56.463  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:05:56.463  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:56.463  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 592809
00:05:56.464  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:56.464  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:56.464  18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 592809'
killing process with pid 592809
18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 592809
18:26:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 592809
00:05:56.722
00:05:56.722  real	0m1.078s
00:05:56.722  user	0m1.158s
00:05:56.722  sys	0m0.443s
00:05:56.722  18:26:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:56.722  18:26:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:56.722  ************************************
00:05:56.722  END TEST exit_on_failed_rpc_init
00:05:56.722  ************************************
00:05:56.722  18:26:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:56.722
00:05:56.722  real	0m13.352s
00:05:56.722  user	0m12.583s
00:05:56.722  sys	0m1.655s
00:05:56.722  18:26:43 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:56.722  18:26:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:56.722  ************************************
00:05:56.722  END TEST skip_rpc
00:05:56.722  ************************************
00:05:56.981  18:26:43 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:56.981  18:26:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:56.981  18:26:43 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:56.981  18:26:43 -- common/autotest_common.sh@10 -- # set +x
00:05:56.981  ************************************
00:05:56.981  START TEST rpc_client
00:05:56.981  ************************************
00:05:56.981  18:26:43 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:56.981  * Looking for test storage...
00:05:56.981  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:05:56.981  18:26:43 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:56.981  18:26:43 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:05:56.981  18:26:43 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:56.981  18:26:43 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@345 -- # : 1
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@353 -- # local d=1
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@355 -- # echo 1
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@353 -- # local d=2
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@355 -- # echo 2
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:56.981  18:26:43 rpc_client -- scripts/common.sh@368 -- # return 0
00:05:56.981  18:26:43 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:56.981  18:26:43 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:56.981  --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.981  --rc genhtml_branch_coverage=1
00:05:56.981  --rc genhtml_function_coverage=1
00:05:56.981  --rc genhtml_legend=1
00:05:56.981  --rc geninfo_all_blocks=1
00:05:56.981  --rc geninfo_unexecuted_blocks=1
00:05:56.981
00:05:56.981  '
00:05:56.981  18:26:43 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:56.981  --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.981  --rc genhtml_branch_coverage=1
00:05:56.981  --rc genhtml_function_coverage=1
00:05:56.981  --rc genhtml_legend=1
00:05:56.981  --rc geninfo_all_blocks=1
00:05:56.981  --rc geninfo_unexecuted_blocks=1
00:05:56.981
00:05:56.981  '
00:05:56.981  18:26:43 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:56.981  --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.981  --rc genhtml_branch_coverage=1
00:05:56.981  --rc genhtml_function_coverage=1
00:05:56.981  --rc genhtml_legend=1
00:05:56.981  --rc geninfo_all_blocks=1
00:05:56.981  --rc geninfo_unexecuted_blocks=1
00:05:56.981
00:05:56.981  '
00:05:56.981  18:26:43 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:56.982  --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.982  --rc genhtml_branch_coverage=1
00:05:56.982  --rc genhtml_function_coverage=1
00:05:56.982  --rc genhtml_legend=1
00:05:56.982  --rc geninfo_all_blocks=1
00:05:56.982  --rc geninfo_unexecuted_blocks=1
00:05:56.982
00:05:56.982  '
00:05:56.982  18:26:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:05:56.982  OK
00:05:56.982  18:26:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:56.982
00:05:56.982  real	0m0.166s
00:05:56.982  user	0m0.110s
00:05:56.982  sys	0m0.066s
00:05:56.982  18:26:43 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:56.982  18:26:43 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:56.982  ************************************
00:05:56.982  END TEST rpc_client
00:05:56.982  ************************************
00:05:56.982  18:26:43 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:56.982  18:26:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:56.982  18:26:43 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:56.982  18:26:43 -- common/autotest_common.sh@10 -- # set +x
00:05:56.982  ************************************
00:05:56.982  START TEST json_config
00:05:56.982  ************************************
00:05:56.982  18:26:43 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:57.241  18:26:43 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:57.241  18:26:43 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:05:57.241  18:26:43 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:57.241  18:26:43 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:57.241  18:26:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:57.241  18:26:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:57.241  18:26:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:57.241  18:26:43 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:05:57.241  18:26:43 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:05:57.241  18:26:43 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:05:57.241  18:26:43 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:05:57.241  18:26:43 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:05:57.241  18:26:43 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:05:57.241  18:26:43 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:05:57.241  18:26:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:57.241  18:26:43 json_config -- scripts/common.sh@344 -- # case "$op" in
00:05:57.241  18:26:43 json_config -- scripts/common.sh@345 -- # : 1
00:05:57.241  18:26:43 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:57.241  18:26:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:57.241  18:26:43 json_config -- scripts/common.sh@365 -- # decimal 1
00:05:57.241  18:26:43 json_config -- scripts/common.sh@353 -- # local d=1
00:05:57.241  18:26:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:57.241  18:26:43 json_config -- scripts/common.sh@355 -- # echo 1
00:05:57.241  18:26:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:05:57.241  18:26:43 json_config -- scripts/common.sh@366 -- # decimal 2
00:05:57.241  18:26:43 json_config -- scripts/common.sh@353 -- # local d=2
00:05:57.241  18:26:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:57.241  18:26:43 json_config -- scripts/common.sh@355 -- # echo 2
00:05:57.241  18:26:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:05:57.241  18:26:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:57.241  18:26:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:57.241  18:26:43 json_config -- scripts/common.sh@368 -- # return 0
00:05:57.241  18:26:43 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:57.241  18:26:43 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:57.242  --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:57.242  --rc genhtml_branch_coverage=1
00:05:57.242  --rc genhtml_function_coverage=1
00:05:57.242  --rc genhtml_legend=1
00:05:57.242  --rc geninfo_all_blocks=1
00:05:57.242  --rc geninfo_unexecuted_blocks=1
00:05:57.242
00:05:57.242  '
00:05:57.242  18:26:43 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:57.242  --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:57.242  --rc genhtml_branch_coverage=1
00:05:57.242  --rc genhtml_function_coverage=1
00:05:57.242  --rc genhtml_legend=1
00:05:57.242  --rc geninfo_all_blocks=1
00:05:57.242  --rc geninfo_unexecuted_blocks=1
00:05:57.242
00:05:57.242  '
00:05:57.242  18:26:43 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:57.242  --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:57.242  --rc genhtml_branch_coverage=1
00:05:57.242  --rc genhtml_function_coverage=1
00:05:57.242  --rc genhtml_legend=1
00:05:57.242  --rc geninfo_all_blocks=1
00:05:57.242  --rc geninfo_unexecuted_blocks=1
00:05:57.242
00:05:57.242  '
00:05:57.242  18:26:43 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:57.242  --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:57.242  --rc genhtml_branch_coverage=1
00:05:57.242  --rc genhtml_function_coverage=1
00:05:57.242  --rc genhtml_legend=1
00:05:57.242  --rc geninfo_all_blocks=1
00:05:57.242  --rc geninfo_unexecuted_blocks=1
00:05:57.242
00:05:57.242  '
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:57.242  18:26:43 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:05:57.242  18:26:43 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:57.242  18:26:43 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:57.242  18:26:43 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:57.242  18:26:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:57.242  18:26:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:57.242  18:26:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:57.242  18:26:43 json_config -- paths/export.sh@5 -- # export PATH
00:05:57.242  18:26:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@51 -- # : 0
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:57.242  18:26:43 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:57.242  18:26:43 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
INFO: JSON configuration test init
18:26:43
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:57.242 18:26:43 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:57.242 18:26:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.242 18:26:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.242 18:26:43 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:57.242 18:26:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.242 18:26:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.242 18:26:43 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:57.242 18:26:43 json_config -- json_config/common.sh@9 -- # local app=target 00:05:57.242 18:26:43 json_config -- json_config/common.sh@10 -- # shift 00:05:57.242 18:26:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.242 18:26:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.242 18:26:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.242 18:26:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.242 18:26:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.242 18:26:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=593080 00:05:57.242 18:26:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:57.242 18:26:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.242 Waiting for target to run... 
00:05:57.242 18:26:43 json_config -- json_config/common.sh@25 -- # waitforlisten 593080 /var/tmp/spdk_tgt.sock 00:05:57.242 18:26:43 json_config -- common/autotest_common.sh@835 -- # '[' -z 593080 ']' 00:05:57.242 18:26:43 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.242 18:26:43 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.242 18:26:43 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.242 18:26:43 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.242 18:26:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.242 [2024-11-17 18:26:43.777115] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:05:57.242 [2024-11-17 18:26:43.777203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593080 ] 00:05:57.810 [2024-11-17 18:26:44.314169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.810 [2024-11-17 18:26:44.355629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.376 18:26:44 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.376 18:26:44 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:58.376 18:26:44 json_config -- json_config/common.sh@26 -- # echo '' 00:05:58.376 00:05:58.376 18:26:44 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:58.376 18:26:44 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:58.376 18:26:44 json_config -- common/autotest_common.sh@726 
-- # xtrace_disable 00:05:58.376 18:26:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.376 18:26:44 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:58.376 18:26:44 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:58.376 18:26:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:58.376 18:26:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.376 18:26:44 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:58.376 18:26:44 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:58.376 18:26:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:01.661 18:26:47 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:01.661 18:26:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:01.661 18:26:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.661 18:26:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.661 18:26:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:01.661 18:26:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:01.661 18:26:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:01.661 18:26:47 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:01.661 18:26:47 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:01.661 18:26:47 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:01.661 18:26:47 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:01.661 18:26:47 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@54 -- # sort 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:01.919 18:26:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:01.919 18:26:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:01.919 18:26:48 json_config -- 
json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:01.919 18:26:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.919 18:26:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:01.919 18:26:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:01.919 18:26:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:02.177 MallocForNvmf0 00:06:02.177 18:26:48 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:02.177 18:26:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:02.434 MallocForNvmf1 00:06:02.434 18:26:48 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:02.434 18:26:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:02.693 [2024-11-17 18:26:49.057780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.693 18:26:49 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:02.693 18:26:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:02.950 18:26:49 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:02.950 18:26:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:03.208 18:26:49 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:03.208 18:26:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:03.466 18:26:49 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:03.466 18:26:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:03.725 [2024-11-17 18:26:50.141450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:03.725 18:26:50 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:03.725 18:26:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:03.725 18:26:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.725 18:26:50 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:03.725 18:26:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:03.725 18:26:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.725 18:26:50 json_config -- json_config/json_config.sh@302 -- # 
[[ 0 -eq 1 ]] 00:06:03.725 18:26:50 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:03.725 18:26:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:03.983 MallocBdevForConfigChangeCheck 00:06:03.983 18:26:50 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:03.983 18:26:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:03.983 18:26:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.983 18:26:50 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:03.983 18:26:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.548 18:26:50 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:04.548 INFO: shutting down applications... 
00:06:04.548 18:26:50 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:04.549 18:26:50 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:04.549 18:26:50 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:04.549 18:26:50 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:06.447 Calling clear_iscsi_subsystem 00:06:06.447 Calling clear_nvmf_subsystem 00:06:06.447 Calling clear_nbd_subsystem 00:06:06.447 Calling clear_ublk_subsystem 00:06:06.447 Calling clear_vhost_blk_subsystem 00:06:06.447 Calling clear_vhost_scsi_subsystem 00:06:06.447 Calling clear_bdev_subsystem 00:06:06.447 18:26:52 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:06.447 18:26:52 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:06.447 18:26:52 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:06.447 18:26:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:06.447 18:26:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:06.447 18:26:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:06.447 18:26:52 json_config -- json_config/json_config.sh@352 -- # break 00:06:06.447 18:26:52 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:06.447 18:26:52 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:06.447 18:26:52 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:06.447 18:26:52 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:06.447 18:26:52 json_config -- json_config/common.sh@35 -- # [[ -n 593080 ]] 00:06:06.448 18:26:52 json_config -- json_config/common.sh@38 -- # kill -SIGINT 593080 00:06:06.448 18:26:52 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:06.448 18:26:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.448 18:26:52 json_config -- json_config/common.sh@41 -- # kill -0 593080 00:06:06.448 18:26:52 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:07.017 18:26:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:07.017 18:26:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.017 18:26:53 json_config -- json_config/common.sh@41 -- # kill -0 593080 00:06:07.017 18:26:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:07.017 18:26:53 json_config -- json_config/common.sh@43 -- # break 00:06:07.017 18:26:53 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:07.017 18:26:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:07.017 SPDK target shutdown done 00:06:07.017 18:26:53 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:07.017 INFO: relaunching applications... 
00:06:07.017 18:26:53 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.017 18:26:53 json_config -- json_config/common.sh@9 -- # local app=target 00:06:07.017 18:26:53 json_config -- json_config/common.sh@10 -- # shift 00:06:07.017 18:26:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:07.017 18:26:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:07.017 18:26:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:07.017 18:26:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.017 18:26:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.017 18:26:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=594398 00:06:07.017 18:26:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:07.017 Waiting for target to run... 00:06:07.017 18:26:53 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.017 18:26:53 json_config -- json_config/common.sh@25 -- # waitforlisten 594398 /var/tmp/spdk_tgt.sock 00:06:07.017 18:26:53 json_config -- common/autotest_common.sh@835 -- # '[' -z 594398 ']' 00:06:07.017 18:26:53 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:07.017 18:26:53 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.017 18:26:53 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:07.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:07.017 18:26:53 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.017 18:26:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.017 [2024-11-17 18:26:53.544888] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:07.017 [2024-11-17 18:26:53.545000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid594398 ] 00:06:07.586 [2024-11-17 18:26:54.073558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.586 [2024-11-17 18:26:54.112806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.873 [2024-11-17 18:26:57.154032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:10.873 [2024-11-17 18:26:57.186450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:10.873 18:26:57 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.873 18:26:57 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:10.873 18:26:57 json_config -- json_config/common.sh@26 -- # echo '' 00:06:10.873 00:06:10.873 18:26:57 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:10.873 18:26:57 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:10.873 INFO: Checking if target configuration is the same... 
00:06:10.873 18:26:57 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.873 18:26:57 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:10.873 18:26:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.873 + '[' 2 -ne 2 ']' 00:06:10.873 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:10.873 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:10.873 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:10.873 +++ basename /dev/fd/62 00:06:10.873 ++ mktemp /tmp/62.XXX 00:06:10.873 + tmp_file_1=/tmp/62.Qta 00:06:10.873 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.873 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:10.873 + tmp_file_2=/tmp/spdk_tgt_config.json.1UX 00:06:10.873 + ret=0 00:06:10.873 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:11.131 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:11.131 + diff -u /tmp/62.Qta /tmp/spdk_tgt_config.json.1UX 00:06:11.131 + echo 'INFO: JSON config files are the same' 00:06:11.131 INFO: JSON config files are the same 00:06:11.131 + rm /tmp/62.Qta /tmp/spdk_tgt_config.json.1UX 00:06:11.131 + exit 0 00:06:11.131 18:26:57 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:11.131 18:26:57 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:11.131 INFO: changing configuration and checking if this can be detected... 
00:06:11.131 18:26:57 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:11.131 18:26:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:11.389 18:26:57 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.389 18:26:57 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:11.389 18:26:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.647 + '[' 2 -ne 2 ']' 00:06:11.647 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:11.647 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:11.647 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:11.647 +++ basename /dev/fd/62 00:06:11.647 ++ mktemp /tmp/62.XXX 00:06:11.647 + tmp_file_1=/tmp/62.Leg 00:06:11.647 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.647 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:11.647 + tmp_file_2=/tmp/spdk_tgt_config.json.hlu 00:06:11.647 + ret=0 00:06:11.647 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:11.906 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:11.906 + diff -u /tmp/62.Leg /tmp/spdk_tgt_config.json.hlu 00:06:11.906 + ret=1 00:06:11.906 + echo '=== Start of file: /tmp/62.Leg ===' 00:06:11.906 + cat /tmp/62.Leg 00:06:11.906 + echo '=== End of file: /tmp/62.Leg ===' 00:06:11.906 + echo '' 00:06:11.906 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hlu ===' 00:06:11.906 + cat /tmp/spdk_tgt_config.json.hlu 00:06:11.906 + echo '=== End of file: /tmp/spdk_tgt_config.json.hlu ===' 00:06:11.906 + echo '' 00:06:11.906 + rm /tmp/62.Leg /tmp/spdk_tgt_config.json.hlu 00:06:11.906 + exit 1 00:06:11.906 18:26:58 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:11.906 INFO: configuration change detected. 
00:06:11.906 18:26:58 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:11.906 18:26:58 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:11.906 18:26:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:11.906 18:26:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.906 18:26:58 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:11.906 18:26:58 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:11.906 18:26:58 json_config -- json_config/json_config.sh@324 -- # [[ -n 594398 ]] 00:06:11.906 18:26:58 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:11.906 18:26:58 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:11.906 18:26:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:11.906 18:26:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.906 18:26:58 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:11.906 18:26:58 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:11.906 18:26:58 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:11.907 18:26:58 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:11.907 18:26:58 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:11.907 18:26:58 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:11.907 18:26:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:11.907 18:26:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.907 18:26:58 json_config -- json_config/json_config.sh@330 -- # killprocess 594398 00:06:11.907 18:26:58 json_config -- common/autotest_common.sh@954 -- # '[' -z 594398 ']' 00:06:11.907 18:26:58 json_config -- common/autotest_common.sh@958 -- # kill -0 594398 
00:06:11.907 18:26:58 json_config -- common/autotest_common.sh@959 -- # uname 00:06:11.907 18:26:58 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.907 18:26:58 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 594398 00:06:12.166 18:26:58 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.166 18:26:58 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.166 18:26:58 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 594398' 00:06:12.166 killing process with pid 594398 00:06:12.166 18:26:58 json_config -- common/autotest_common.sh@973 -- # kill 594398 00:06:12.166 18:26:58 json_config -- common/autotest_common.sh@978 -- # wait 594398 00:06:13.547 18:27:00 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.548 18:27:00 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:13.548 18:27:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.548 18:27:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.807 18:27:00 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:13.807 18:27:00 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:13.807 INFO: Success 00:06:13.807 00:06:13.807 real 0m16.575s 00:06:13.807 user 0m18.553s 00:06:13.807 sys 0m2.264s 00:06:13.807 18:27:00 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.807 18:27:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.807 ************************************ 00:06:13.807 END TEST json_config 00:06:13.807 ************************************ 00:06:13.807 18:27:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:13.807 18:27:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.807 18:27:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.807 18:27:00 -- common/autotest_common.sh@10 -- # set +x 00:06:13.807 ************************************ 00:06:13.807 START TEST json_config_extra_key 00:06:13.807 ************************************ 00:06:13.807 18:27:00 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:13.807 18:27:00 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:13.807 18:27:00 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:13.807 18:27:00 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:13.807 18:27:00 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:13.807 18:27:00 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.807 18:27:00 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:13.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.807 --rc genhtml_branch_coverage=1 00:06:13.807 --rc genhtml_function_coverage=1 00:06:13.807 --rc genhtml_legend=1 00:06:13.807 --rc geninfo_all_blocks=1 
00:06:13.807 --rc geninfo_unexecuted_blocks=1 00:06:13.807 00:06:13.807 ' 00:06:13.807 18:27:00 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:13.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.807 --rc genhtml_branch_coverage=1 00:06:13.807 --rc genhtml_function_coverage=1 00:06:13.807 --rc genhtml_legend=1 00:06:13.807 --rc geninfo_all_blocks=1 00:06:13.807 --rc geninfo_unexecuted_blocks=1 00:06:13.807 00:06:13.807 ' 00:06:13.807 18:27:00 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:13.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.807 --rc genhtml_branch_coverage=1 00:06:13.807 --rc genhtml_function_coverage=1 00:06:13.807 --rc genhtml_legend=1 00:06:13.807 --rc geninfo_all_blocks=1 00:06:13.807 --rc geninfo_unexecuted_blocks=1 00:06:13.807 00:06:13.807 ' 00:06:13.807 18:27:00 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:13.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.807 --rc genhtml_branch_coverage=1 00:06:13.807 --rc genhtml_function_coverage=1 00:06:13.807 --rc genhtml_legend=1 00:06:13.807 --rc geninfo_all_blocks=1 00:06:13.807 --rc geninfo_unexecuted_blocks=1 00:06:13.807 00:06:13.807 ' 00:06:13.807 18:27:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.807 18:27:00 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.807 18:27:00 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.807 18:27:00 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.808 18:27:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.808 18:27:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.808 18:27:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:13.808 18:27:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.808 18:27:00 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:13.808 18:27:00 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.808 18:27:00 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.808 18:27:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.808 18:27:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.808 18:27:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.808 18:27:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.808 18:27:00 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:13.808 18:27:00 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.808 18:27:00 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.808 18:27:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:13.808 18:27:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:13.808 18:27:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:13.808 18:27:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:13.808 18:27:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:13.808 18:27:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:13.808 18:27:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:13.808 18:27:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:13.808 18:27:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:13.808 18:27:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:13.808 18:27:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:13.808 INFO: launching applications... 00:06:13.808 18:27:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:13.808 18:27:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:13.808 18:27:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:13.808 18:27:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:13.808 18:27:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:13.808 18:27:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:13.808 18:27:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.808 18:27:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.808 18:27:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=595352 00:06:13.808 18:27:00 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:13.808 18:27:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:13.808 Waiting for target to run... 
00:06:13.808 18:27:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 595352 /var/tmp/spdk_tgt.sock 00:06:13.808 18:27:00 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 595352 ']' 00:06:13.808 18:27:00 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:13.808 18:27:00 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.808 18:27:00 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:13.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:13.808 18:27:00 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.808 18:27:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.808 [2024-11-17 18:27:00.375904] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:13.808 [2024-11-17 18:27:00.376000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595352 ] 00:06:14.374 [2024-11-17 18:27:00.736373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.374 [2024-11-17 18:27:00.770441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.941 18:27:01 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.941 18:27:01 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:14.941 18:27:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:14.941 00:06:14.941 18:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:14.941 INFO: shutting down applications... 00:06:14.941 18:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:14.941 18:27:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:14.941 18:27:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:14.941 18:27:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 595352 ]] 00:06:14.941 18:27:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 595352 00:06:14.941 18:27:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:14.941 18:27:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.941 18:27:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 595352 00:06:14.941 18:27:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.509 18:27:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.509 18:27:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.509 18:27:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 595352 00:06:15.509 18:27:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:15.509 18:27:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:15.509 18:27:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:15.509 18:27:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:15.509 SPDK target shutdown done 00:06:15.509 18:27:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:15.509 Success 00:06:15.509 00:06:15.509 real 0m1.681s 00:06:15.509 user 0m1.615s 00:06:15.509 sys 0m0.451s 00:06:15.509 18:27:01 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.509 18:27:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
00:06:15.509 ************************************ 00:06:15.509 END TEST json_config_extra_key 00:06:15.509 ************************************ 00:06:15.509 18:27:01 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.509 18:27:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.509 18:27:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.509 18:27:01 -- common/autotest_common.sh@10 -- # set +x 00:06:15.509 ************************************ 00:06:15.509 START TEST alias_rpc 00:06:15.509 ************************************ 00:06:15.509 18:27:01 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.509 * Looking for test storage... 00:06:15.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:15.509 18:27:01 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.509 18:27:01 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.509 18:27:01 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.509 18:27:02 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 
00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.509 18:27:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:15.509 18:27:02 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.509 18:27:02 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.509 --rc genhtml_branch_coverage=1 00:06:15.510 --rc genhtml_function_coverage=1 00:06:15.510 --rc genhtml_legend=1 00:06:15.510 --rc geninfo_all_blocks=1 00:06:15.510 --rc geninfo_unexecuted_blocks=1 00:06:15.510 00:06:15.510 ' 
00:06:15.510 18:27:02 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.510 --rc genhtml_branch_coverage=1 00:06:15.510 --rc genhtml_function_coverage=1 00:06:15.510 --rc genhtml_legend=1 00:06:15.510 --rc geninfo_all_blocks=1 00:06:15.510 --rc geninfo_unexecuted_blocks=1 00:06:15.510 00:06:15.510 ' 00:06:15.510 18:27:02 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.510 --rc genhtml_branch_coverage=1 00:06:15.510 --rc genhtml_function_coverage=1 00:06:15.510 --rc genhtml_legend=1 00:06:15.510 --rc geninfo_all_blocks=1 00:06:15.510 --rc geninfo_unexecuted_blocks=1 00:06:15.510 00:06:15.510 ' 00:06:15.510 18:27:02 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.510 --rc genhtml_branch_coverage=1 00:06:15.510 --rc genhtml_function_coverage=1 00:06:15.510 --rc genhtml_legend=1 00:06:15.510 --rc geninfo_all_blocks=1 00:06:15.510 --rc geninfo_unexecuted_blocks=1 00:06:15.510 00:06:15.510 ' 00:06:15.510 18:27:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:15.510 18:27:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=595749 00:06:15.510 18:27:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.510 18:27:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 595749 00:06:15.510 18:27:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 595749 ']' 00:06:15.510 18:27:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.510 18:27:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.510 18:27:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.510 18:27:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.510 18:27:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.769 [2024-11-17 18:27:02.113361] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:15.769 [2024-11-17 18:27:02.113454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595749 ] 00:06:15.769 [2024-11-17 18:27:02.185244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.769 [2024-11-17 18:27:02.233260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.027 18:27:02 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.027 18:27:02 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:16.027 18:27:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:16.285 18:27:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 595749 00:06:16.285 18:27:02 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 595749 ']' 00:06:16.285 18:27:02 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 595749 00:06:16.285 18:27:02 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:16.285 18:27:02 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.285 18:27:02 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 595749 00:06:16.285 18:27:02 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.285 18:27:02 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.285 18:27:02 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 595749' 00:06:16.285 killing process with pid 595749 00:06:16.285 18:27:02 alias_rpc -- common/autotest_common.sh@973 -- # kill 595749 00:06:16.285 18:27:02 alias_rpc -- common/autotest_common.sh@978 -- # wait 595749 00:06:16.852 00:06:16.852 real 0m1.273s 00:06:16.852 user 0m1.381s 00:06:16.852 sys 0m0.458s 00:06:16.852 18:27:03 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.852 18:27:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.852 ************************************ 00:06:16.852 END TEST alias_rpc 00:06:16.852 ************************************ 00:06:16.852 18:27:03 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:16.852 18:27:03 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:16.852 18:27:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.852 18:27:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.852 18:27:03 -- common/autotest_common.sh@10 -- # set +x 00:06:16.852 ************************************ 00:06:16.852 START TEST spdkcli_tcp 00:06:16.852 ************************************ 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:16.852 * Looking for test storage... 
00:06:16.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.852 18:27:03 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.852 --rc genhtml_branch_coverage=1 00:06:16.852 --rc genhtml_function_coverage=1 00:06:16.852 --rc genhtml_legend=1 00:06:16.852 --rc geninfo_all_blocks=1 00:06:16.852 --rc geninfo_unexecuted_blocks=1 00:06:16.852 00:06:16.852 ' 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.852 --rc genhtml_branch_coverage=1 00:06:16.852 --rc genhtml_function_coverage=1 00:06:16.852 --rc genhtml_legend=1 00:06:16.852 --rc geninfo_all_blocks=1 00:06:16.852 --rc geninfo_unexecuted_blocks=1 00:06:16.852 00:06:16.852 ' 00:06:16.852 18:27:03 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.852 --rc genhtml_branch_coverage=1 00:06:16.852 --rc genhtml_function_coverage=1 00:06:16.852 --rc genhtml_legend=1 00:06:16.852 --rc geninfo_all_blocks=1 00:06:16.852 --rc geninfo_unexecuted_blocks=1 00:06:16.852 00:06:16.852 ' 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.852 --rc genhtml_branch_coverage=1 00:06:16.852 --rc genhtml_function_coverage=1 00:06:16.852 --rc genhtml_legend=1 00:06:16.852 --rc geninfo_all_blocks=1 00:06:16.852 --rc geninfo_unexecuted_blocks=1 00:06:16.852 00:06:16.852 ' 00:06:16.852 18:27:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:16.852 18:27:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:16.852 18:27:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:16.852 18:27:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:16.852 18:27:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:16.852 18:27:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:16.852 18:27:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.852 18:27:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=595948 00:06:16.852 18:27:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:16.852 18:27:03 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 595948 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 595948 ']' 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.852 18:27:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.111 [2024-11-17 18:27:03.436429] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:17.111 [2024-11-17 18:27:03.436521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595948 ] 00:06:17.111 [2024-11-17 18:27:03.506775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.111 [2024-11-17 18:27:03.557099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.111 [2024-11-17 18:27:03.557104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.368 18:27:03 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.368 18:27:03 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:17.368 18:27:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=595959 00:06:17.368 18:27:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:17.368 18:27:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:06:17.626 [ 00:06:17.627 "bdev_malloc_delete", 00:06:17.627 "bdev_malloc_create", 00:06:17.627 "bdev_null_resize", 00:06:17.627 "bdev_null_delete", 00:06:17.627 "bdev_null_create", 00:06:17.627 "bdev_nvme_cuse_unregister", 00:06:17.627 "bdev_nvme_cuse_register", 00:06:17.627 "bdev_opal_new_user", 00:06:17.627 "bdev_opal_set_lock_state", 00:06:17.627 "bdev_opal_delete", 00:06:17.627 "bdev_opal_get_info", 00:06:17.627 "bdev_opal_create", 00:06:17.627 "bdev_nvme_opal_revert", 00:06:17.627 "bdev_nvme_opal_init", 00:06:17.627 "bdev_nvme_send_cmd", 00:06:17.627 "bdev_nvme_set_keys", 00:06:17.627 "bdev_nvme_get_path_iostat", 00:06:17.627 "bdev_nvme_get_mdns_discovery_info", 00:06:17.627 "bdev_nvme_stop_mdns_discovery", 00:06:17.627 "bdev_nvme_start_mdns_discovery", 00:06:17.627 "bdev_nvme_set_multipath_policy", 00:06:17.627 "bdev_nvme_set_preferred_path", 00:06:17.627 "bdev_nvme_get_io_paths", 00:06:17.627 "bdev_nvme_remove_error_injection", 00:06:17.627 "bdev_nvme_add_error_injection", 00:06:17.627 "bdev_nvme_get_discovery_info", 00:06:17.627 "bdev_nvme_stop_discovery", 00:06:17.627 "bdev_nvme_start_discovery", 00:06:17.627 "bdev_nvme_get_controller_health_info", 00:06:17.627 "bdev_nvme_disable_controller", 00:06:17.627 "bdev_nvme_enable_controller", 00:06:17.627 "bdev_nvme_reset_controller", 00:06:17.627 "bdev_nvme_get_transport_statistics", 00:06:17.627 "bdev_nvme_apply_firmware", 00:06:17.627 "bdev_nvme_detach_controller", 00:06:17.627 "bdev_nvme_get_controllers", 00:06:17.627 "bdev_nvme_attach_controller", 00:06:17.627 "bdev_nvme_set_hotplug", 00:06:17.627 "bdev_nvme_set_options", 00:06:17.627 "bdev_passthru_delete", 00:06:17.627 "bdev_passthru_create", 00:06:17.627 "bdev_lvol_set_parent_bdev", 00:06:17.627 "bdev_lvol_set_parent", 00:06:17.627 "bdev_lvol_check_shallow_copy", 00:06:17.627 "bdev_lvol_start_shallow_copy", 00:06:17.627 "bdev_lvol_grow_lvstore", 00:06:17.627 "bdev_lvol_get_lvols", 00:06:17.627 "bdev_lvol_get_lvstores", 
00:06:17.627 "bdev_lvol_delete", 00:06:17.627 "bdev_lvol_set_read_only", 00:06:17.627 "bdev_lvol_resize", 00:06:17.627 "bdev_lvol_decouple_parent", 00:06:17.627 "bdev_lvol_inflate", 00:06:17.627 "bdev_lvol_rename", 00:06:17.627 "bdev_lvol_clone_bdev", 00:06:17.627 "bdev_lvol_clone", 00:06:17.627 "bdev_lvol_snapshot", 00:06:17.627 "bdev_lvol_create", 00:06:17.627 "bdev_lvol_delete_lvstore", 00:06:17.627 "bdev_lvol_rename_lvstore", 00:06:17.627 "bdev_lvol_create_lvstore", 00:06:17.627 "bdev_raid_set_options", 00:06:17.627 "bdev_raid_remove_base_bdev", 00:06:17.627 "bdev_raid_add_base_bdev", 00:06:17.627 "bdev_raid_delete", 00:06:17.627 "bdev_raid_create", 00:06:17.627 "bdev_raid_get_bdevs", 00:06:17.627 "bdev_error_inject_error", 00:06:17.627 "bdev_error_delete", 00:06:17.627 "bdev_error_create", 00:06:17.627 "bdev_split_delete", 00:06:17.627 "bdev_split_create", 00:06:17.627 "bdev_delay_delete", 00:06:17.627 "bdev_delay_create", 00:06:17.627 "bdev_delay_update_latency", 00:06:17.627 "bdev_zone_block_delete", 00:06:17.627 "bdev_zone_block_create", 00:06:17.627 "blobfs_create", 00:06:17.627 "blobfs_detect", 00:06:17.627 "blobfs_set_cache_size", 00:06:17.627 "bdev_aio_delete", 00:06:17.627 "bdev_aio_rescan", 00:06:17.627 "bdev_aio_create", 00:06:17.627 "bdev_ftl_set_property", 00:06:17.627 "bdev_ftl_get_properties", 00:06:17.627 "bdev_ftl_get_stats", 00:06:17.627 "bdev_ftl_unmap", 00:06:17.627 "bdev_ftl_unload", 00:06:17.627 "bdev_ftl_delete", 00:06:17.627 "bdev_ftl_load", 00:06:17.627 "bdev_ftl_create", 00:06:17.627 "bdev_virtio_attach_controller", 00:06:17.627 "bdev_virtio_scsi_get_devices", 00:06:17.627 "bdev_virtio_detach_controller", 00:06:17.627 "bdev_virtio_blk_set_hotplug", 00:06:17.627 "bdev_iscsi_delete", 00:06:17.627 "bdev_iscsi_create", 00:06:17.627 "bdev_iscsi_set_options", 00:06:17.627 "accel_error_inject_error", 00:06:17.627 "ioat_scan_accel_module", 00:06:17.627 "dsa_scan_accel_module", 00:06:17.627 "iaa_scan_accel_module", 00:06:17.627 
"vfu_virtio_create_fs_endpoint", 00:06:17.627 "vfu_virtio_create_scsi_endpoint", 00:06:17.627 "vfu_virtio_scsi_remove_target", 00:06:17.627 "vfu_virtio_scsi_add_target", 00:06:17.627 "vfu_virtio_create_blk_endpoint", 00:06:17.627 "vfu_virtio_delete_endpoint", 00:06:17.627 "keyring_file_remove_key", 00:06:17.627 "keyring_file_add_key", 00:06:17.627 "keyring_linux_set_options", 00:06:17.627 "fsdev_aio_delete", 00:06:17.627 "fsdev_aio_create", 00:06:17.627 "iscsi_get_histogram", 00:06:17.627 "iscsi_enable_histogram", 00:06:17.627 "iscsi_set_options", 00:06:17.627 "iscsi_get_auth_groups", 00:06:17.627 "iscsi_auth_group_remove_secret", 00:06:17.627 "iscsi_auth_group_add_secret", 00:06:17.627 "iscsi_delete_auth_group", 00:06:17.627 "iscsi_create_auth_group", 00:06:17.627 "iscsi_set_discovery_auth", 00:06:17.627 "iscsi_get_options", 00:06:17.627 "iscsi_target_node_request_logout", 00:06:17.627 "iscsi_target_node_set_redirect", 00:06:17.627 "iscsi_target_node_set_auth", 00:06:17.627 "iscsi_target_node_add_lun", 00:06:17.627 "iscsi_get_stats", 00:06:17.627 "iscsi_get_connections", 00:06:17.627 "iscsi_portal_group_set_auth", 00:06:17.627 "iscsi_start_portal_group", 00:06:17.627 "iscsi_delete_portal_group", 00:06:17.627 "iscsi_create_portal_group", 00:06:17.627 "iscsi_get_portal_groups", 00:06:17.627 "iscsi_delete_target_node", 00:06:17.627 "iscsi_target_node_remove_pg_ig_maps", 00:06:17.627 "iscsi_target_node_add_pg_ig_maps", 00:06:17.627 "iscsi_create_target_node", 00:06:17.627 "iscsi_get_target_nodes", 00:06:17.627 "iscsi_delete_initiator_group", 00:06:17.627 "iscsi_initiator_group_remove_initiators", 00:06:17.627 "iscsi_initiator_group_add_initiators", 00:06:17.627 "iscsi_create_initiator_group", 00:06:17.627 "iscsi_get_initiator_groups", 00:06:17.627 "nvmf_set_crdt", 00:06:17.627 "nvmf_set_config", 00:06:17.627 "nvmf_set_max_subsystems", 00:06:17.627 "nvmf_stop_mdns_prr", 00:06:17.627 "nvmf_publish_mdns_prr", 00:06:17.627 "nvmf_subsystem_get_listeners", 00:06:17.627 
"nvmf_subsystem_get_qpairs", 00:06:17.627 "nvmf_subsystem_get_controllers", 00:06:17.627 "nvmf_get_stats", 00:06:17.627 "nvmf_get_transports", 00:06:17.627 "nvmf_create_transport", 00:06:17.627 "nvmf_get_targets", 00:06:17.627 "nvmf_delete_target", 00:06:17.627 "nvmf_create_target", 00:06:17.627 "nvmf_subsystem_allow_any_host", 00:06:17.627 "nvmf_subsystem_set_keys", 00:06:17.627 "nvmf_subsystem_remove_host", 00:06:17.627 "nvmf_subsystem_add_host", 00:06:17.627 "nvmf_ns_remove_host", 00:06:17.627 "nvmf_ns_add_host", 00:06:17.627 "nvmf_subsystem_remove_ns", 00:06:17.627 "nvmf_subsystem_set_ns_ana_group", 00:06:17.627 "nvmf_subsystem_add_ns", 00:06:17.627 "nvmf_subsystem_listener_set_ana_state", 00:06:17.627 "nvmf_discovery_get_referrals", 00:06:17.627 "nvmf_discovery_remove_referral", 00:06:17.627 "nvmf_discovery_add_referral", 00:06:17.627 "nvmf_subsystem_remove_listener", 00:06:17.627 "nvmf_subsystem_add_listener", 00:06:17.627 "nvmf_delete_subsystem", 00:06:17.627 "nvmf_create_subsystem", 00:06:17.627 "nvmf_get_subsystems", 00:06:17.627 "env_dpdk_get_mem_stats", 00:06:17.627 "nbd_get_disks", 00:06:17.627 "nbd_stop_disk", 00:06:17.627 "nbd_start_disk", 00:06:17.627 "ublk_recover_disk", 00:06:17.627 "ublk_get_disks", 00:06:17.627 "ublk_stop_disk", 00:06:17.627 "ublk_start_disk", 00:06:17.627 "ublk_destroy_target", 00:06:17.627 "ublk_create_target", 00:06:17.627 "virtio_blk_create_transport", 00:06:17.627 "virtio_blk_get_transports", 00:06:17.627 "vhost_controller_set_coalescing", 00:06:17.627 "vhost_get_controllers", 00:06:17.627 "vhost_delete_controller", 00:06:17.627 "vhost_create_blk_controller", 00:06:17.627 "vhost_scsi_controller_remove_target", 00:06:17.627 "vhost_scsi_controller_add_target", 00:06:17.627 "vhost_start_scsi_controller", 00:06:17.627 "vhost_create_scsi_controller", 00:06:17.627 "thread_set_cpumask", 00:06:17.627 "scheduler_set_options", 00:06:17.627 "framework_get_governor", 00:06:17.627 "framework_get_scheduler", 00:06:17.627 
"framework_set_scheduler", 00:06:17.627 "framework_get_reactors", 00:06:17.627 "thread_get_io_channels", 00:06:17.627 "thread_get_pollers", 00:06:17.627 "thread_get_stats", 00:06:17.627 "framework_monitor_context_switch", 00:06:17.627 "spdk_kill_instance", 00:06:17.627 "log_enable_timestamps", 00:06:17.627 "log_get_flags", 00:06:17.627 "log_clear_flag", 00:06:17.627 "log_set_flag", 00:06:17.627 "log_get_level", 00:06:17.627 "log_set_level", 00:06:17.627 "log_get_print_level", 00:06:17.627 "log_set_print_level", 00:06:17.627 "framework_enable_cpumask_locks", 00:06:17.628 "framework_disable_cpumask_locks", 00:06:17.628 "framework_wait_init", 00:06:17.628 "framework_start_init", 00:06:17.628 "scsi_get_devices", 00:06:17.628 "bdev_get_histogram", 00:06:17.628 "bdev_enable_histogram", 00:06:17.628 "bdev_set_qos_limit", 00:06:17.628 "bdev_set_qd_sampling_period", 00:06:17.628 "bdev_get_bdevs", 00:06:17.628 "bdev_reset_iostat", 00:06:17.628 "bdev_get_iostat", 00:06:17.628 "bdev_examine", 00:06:17.628 "bdev_wait_for_examine", 00:06:17.628 "bdev_set_options", 00:06:17.628 "accel_get_stats", 00:06:17.628 "accel_set_options", 00:06:17.628 "accel_set_driver", 00:06:17.628 "accel_crypto_key_destroy", 00:06:17.628 "accel_crypto_keys_get", 00:06:17.628 "accel_crypto_key_create", 00:06:17.628 "accel_assign_opc", 00:06:17.628 "accel_get_module_info", 00:06:17.628 "accel_get_opc_assignments", 00:06:17.628 "vmd_rescan", 00:06:17.628 "vmd_remove_device", 00:06:17.628 "vmd_enable", 00:06:17.628 "sock_get_default_impl", 00:06:17.628 "sock_set_default_impl", 00:06:17.628 "sock_impl_set_options", 00:06:17.628 "sock_impl_get_options", 00:06:17.628 "iobuf_get_stats", 00:06:17.628 "iobuf_set_options", 00:06:17.628 "keyring_get_keys", 00:06:17.628 "vfu_tgt_set_base_path", 00:06:17.628 "framework_get_pci_devices", 00:06:17.628 "framework_get_config", 00:06:17.628 "framework_get_subsystems", 00:06:17.628 "fsdev_set_opts", 00:06:17.628 "fsdev_get_opts", 00:06:17.628 "trace_get_info", 
00:06:17.628 "trace_get_tpoint_group_mask", 00:06:17.628 "trace_disable_tpoint_group", 00:06:17.628 "trace_enable_tpoint_group", 00:06:17.628 "trace_clear_tpoint_mask", 00:06:17.628 "trace_set_tpoint_mask", 00:06:17.628 "notify_get_notifications", 00:06:17.628 "notify_get_types", 00:06:17.628 "spdk_get_version", 00:06:17.628 "rpc_get_methods" 00:06:17.628 ] 00:06:17.628 18:27:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:17.628 18:27:04 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.628 18:27:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.628 18:27:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:17.628 18:27:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 595948 00:06:17.628 18:27:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 595948 ']' 00:06:17.628 18:27:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 595948 00:06:17.628 18:27:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:17.628 18:27:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.628 18:27:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 595948 00:06:17.628 18:27:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.628 18:27:04 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.628 18:27:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 595948' 00:06:17.628 killing process with pid 595948 00:06:17.628 18:27:04 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 595948 00:06:17.628 18:27:04 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 595948 00:06:18.195 00:06:18.195 real 0m1.305s 00:06:18.195 user 0m2.328s 00:06:18.195 sys 0m0.472s 00:06:18.195 18:27:04 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.195 18:27:04 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:06:18.195 ************************************ 00:06:18.195 END TEST spdkcli_tcp 00:06:18.195 ************************************ 00:06:18.195 18:27:04 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.195 18:27:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.195 18:27:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.195 18:27:04 -- common/autotest_common.sh@10 -- # set +x 00:06:18.195 ************************************ 00:06:18.195 START TEST dpdk_mem_utility 00:06:18.195 ************************************ 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.195 * Looking for test storage... 00:06:18.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.195 18:27:04 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.195 18:27:04 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.195 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.195 --rc genhtml_branch_coverage=1 00:06:18.195 --rc genhtml_function_coverage=1 00:06:18.195 --rc genhtml_legend=1 00:06:18.195 --rc geninfo_all_blocks=1 00:06:18.195 --rc geninfo_unexecuted_blocks=1 00:06:18.195 00:06:18.195 ' 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.195 --rc genhtml_branch_coverage=1 00:06:18.195 --rc genhtml_function_coverage=1 00:06:18.195 --rc genhtml_legend=1 00:06:18.195 --rc geninfo_all_blocks=1 00:06:18.195 --rc geninfo_unexecuted_blocks=1 00:06:18.195 00:06:18.195 ' 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.195 --rc genhtml_branch_coverage=1 00:06:18.195 --rc genhtml_function_coverage=1 00:06:18.195 --rc genhtml_legend=1 00:06:18.195 --rc geninfo_all_blocks=1 00:06:18.195 --rc geninfo_unexecuted_blocks=1 00:06:18.195 00:06:18.195 ' 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.195 --rc genhtml_branch_coverage=1 00:06:18.195 --rc genhtml_function_coverage=1 00:06:18.195 --rc genhtml_legend=1 00:06:18.195 --rc geninfo_all_blocks=1 00:06:18.195 --rc geninfo_unexecuted_blocks=1 00:06:18.195 00:06:18.195 ' 00:06:18.195 18:27:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:18.195 18:27:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=596163 00:06:18.195 18:27:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.195 18:27:04 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 596163 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 596163 ']' 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.195 18:27:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.454 [2024-11-17 18:27:04.790524] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:18.454 [2024-11-17 18:27:04.790617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596163 ] 00:06:18.454 [2024-11-17 18:27:04.861110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.454 [2024-11-17 18:27:04.910036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.712 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.712 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:18.712 18:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:18.712 18:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:18.712 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.712 
18:27:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.712 { 00:06:18.712 "filename": "/tmp/spdk_mem_dump.txt" 00:06:18.712 } 00:06:18.712 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.712 18:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:18.712 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:18.712 1 heaps totaling size 810.000000 MiB 00:06:18.712 size: 810.000000 MiB heap id: 0 00:06:18.712 end heaps---------- 00:06:18.712 9 mempools totaling size 595.772034 MiB 00:06:18.712 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:18.712 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:18.712 size: 92.545471 MiB name: bdev_io_596163 00:06:18.712 size: 50.003479 MiB name: msgpool_596163 00:06:18.712 size: 36.509338 MiB name: fsdev_io_596163 00:06:18.712 size: 21.763794 MiB name: PDU_Pool 00:06:18.712 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:18.712 size: 4.133484 MiB name: evtpool_596163 00:06:18.712 size: 0.026123 MiB name: Session_Pool 00:06:18.712 end mempools------- 00:06:18.712 6 memzones totaling size 4.142822 MiB 00:06:18.712 size: 1.000366 MiB name: RG_ring_0_596163 00:06:18.712 size: 1.000366 MiB name: RG_ring_1_596163 00:06:18.712 size: 1.000366 MiB name: RG_ring_4_596163 00:06:18.712 size: 1.000366 MiB name: RG_ring_5_596163 00:06:18.712 size: 0.125366 MiB name: RG_ring_2_596163 00:06:18.712 size: 0.015991 MiB name: RG_ring_3_596163 00:06:18.712 end memzones------- 00:06:18.712 18:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:18.971 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:18.971 list of free elements. 
size: 10.862488 MiB 00:06:18.971 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:18.971 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:18.971 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:18.971 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:18.971 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:18.971 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:18.971 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:18.971 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:18.971 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:18.971 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:18.971 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:18.971 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:18.971 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:18.971 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:18.971 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:18.971 list of standard malloc elements. 
size: 199.218628 MiB 00:06:18.971 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:18.971 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:18.971 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:18.971 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:18.971 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:18.971 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:18.971 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:18.971 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:18.971 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:18.971 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:18.971 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:18.971 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:18.971 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:18.971 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:18.971 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:18.971 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:18.971 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:18.971 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:18.971 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:18.971 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:18.971 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:18.971 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:18.971 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:18.971 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:18.971 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:18.971 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:18.971 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:18.971 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:18.971 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:18.971 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:18.972 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:18.972 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:18.972 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:18.972 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:18.972 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:18.972 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:18.972 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:18.972 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:18.972 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:18.972 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:18.972 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:18.972 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:18.972 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:18.972 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:18.972 list of memzone associated elements. 
size: 599.918884 MiB 00:06:18.972 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:18.972 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:18.972 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:18.972 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:18.972 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:18.972 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_596163_0 00:06:18.972 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:18.972 associated memzone info: size: 48.002930 MiB name: MP_msgpool_596163_0 00:06:18.972 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:18.972 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_596163_0 00:06:18.972 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:18.972 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:18.972 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:18.972 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:18.972 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:18.972 associated memzone info: size: 3.000122 MiB name: MP_evtpool_596163_0 00:06:18.972 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:18.972 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_596163 00:06:18.972 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:18.972 associated memzone info: size: 1.007996 MiB name: MP_evtpool_596163 00:06:18.972 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:18.972 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:18.972 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:18.972 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:18.972 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:18.972 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:18.972 element at address: 0x200003efba40 with size: 1.008118 MiB
00:06:18.972 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:18.972 element at address: 0x200000cff180 with size: 1.000488 MiB
00:06:18.972 associated memzone info: size: 1.000366 MiB name: RG_ring_0_596163
00:06:18.972 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:06:18.972 associated memzone info: size: 1.000366 MiB name: RG_ring_1_596163
00:06:18.972 element at address: 0x200012cf4580 with size: 1.000488 MiB
00:06:18.972 associated memzone info: size: 1.000366 MiB name: RG_ring_4_596163
00:06:18.972 element at address: 0x2000318fe940 with size: 1.000488 MiB
00:06:18.972 associated memzone info: size: 1.000366 MiB name: RG_ring_5_596163
00:06:18.972 element at address: 0x20000087f740 with size: 0.500488 MiB
00:06:18.972 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_596163
00:06:18.972 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:06:18.972 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_596163
00:06:18.972 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:06:18.972 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:18.972 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:06:18.972 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:18.972 element at address: 0x20001907c540 with size: 0.250488 MiB
00:06:18.972 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:18.972 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:06:18.972 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_596163
00:06:18.972 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:06:18.972 associated memzone info: size: 0.125366 MiB name: RG_ring_2_596163
00:06:18.972 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:06:18.972 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:18.972 element at address: 0x200027a69100 with size: 0.023743 MiB
00:06:18.972 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:18.972 element at address: 0x20000085b100 with size: 0.016113 MiB
00:06:18.972 associated memzone info: size: 0.015991 MiB name: RG_ring_3_596163
00:06:18.972 element at address: 0x200027a6f240 with size: 0.002441 MiB
00:06:18.972 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:18.972 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:06:18.972 associated memzone info: size: 0.000183 MiB name: MP_msgpool_596163
00:06:18.972 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:06:18.972 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_596163
00:06:18.972 element at address: 0x20000085af00 with size: 0.000305 MiB
00:06:18.972 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_596163
00:06:18.972 element at address: 0x200027a6fd00 with size: 0.000305 MiB
00:06:18.972 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:18.972 18:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:18.972 18:27:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 596163
00:06:18.972 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 596163 ']'
00:06:18.972 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 596163
00:06:18.972 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:06:18.972 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:18.972 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 596163
00:06:18.972 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:18.972 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:18.972 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 596163'
00:06:18.972 killing process with pid 596163
00:06:18.972 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 596163
00:06:18.972 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 596163
00:06:19.231 
00:06:19.231 real 0m1.133s
00:06:19.231 user 0m1.107s
00:06:19.232 sys 0m0.448s
00:06:19.232 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:19.232 18:27:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:19.232 ************************************
00:06:19.232 END TEST dpdk_mem_utility
00:06:19.232 ************************************
00:06:19.232 18:27:05 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:19.232 18:27:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:19.232 18:27:05 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:19.232 18:27:05 -- common/autotest_common.sh@10 -- # set +x
00:06:19.232 ************************************
00:06:19.232 START TEST event
00:06:19.232 ************************************
00:06:19.232 18:27:05 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:19.232 * Looking for test storage...
00:06:19.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:19.232 18:27:05 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:19.232 18:27:05 event -- common/autotest_common.sh@1693 -- # lcov --version
00:06:19.232 18:27:05 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:19.490 18:27:05 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:19.490 18:27:05 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:19.490 18:27:05 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:19.490 18:27:05 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:19.490 18:27:05 event -- scripts/common.sh@336 -- # IFS=.-:
00:06:19.490 18:27:05 event -- scripts/common.sh@336 -- # read -ra ver1
00:06:19.490 18:27:05 event -- scripts/common.sh@337 -- # IFS=.-:
00:06:19.490 18:27:05 event -- scripts/common.sh@337 -- # read -ra ver2
00:06:19.490 18:27:05 event -- scripts/common.sh@338 -- # local 'op=<'
00:06:19.490 18:27:05 event -- scripts/common.sh@340 -- # ver1_l=2
00:06:19.490 18:27:05 event -- scripts/common.sh@341 -- # ver2_l=1
00:06:19.490 18:27:05 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:19.490 18:27:05 event -- scripts/common.sh@344 -- # case "$op" in
00:06:19.490 18:27:05 event -- scripts/common.sh@345 -- # : 1
00:06:19.490 18:27:05 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:19.490 18:27:05 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:19.490 18:27:05 event -- scripts/common.sh@365 -- # decimal 1
00:06:19.490 18:27:05 event -- scripts/common.sh@353 -- # local d=1
00:06:19.490 18:27:05 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:19.490 18:27:05 event -- scripts/common.sh@355 -- # echo 1
00:06:19.490 18:27:05 event -- scripts/common.sh@365 -- # ver1[v]=1
00:06:19.490 18:27:05 event -- scripts/common.sh@366 -- # decimal 2
00:06:19.490 18:27:05 event -- scripts/common.sh@353 -- # local d=2
00:06:19.490 18:27:05 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:19.490 18:27:05 event -- scripts/common.sh@355 -- # echo 2
00:06:19.490 18:27:05 event -- scripts/common.sh@366 -- # ver2[v]=2
00:06:19.490 18:27:05 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:19.491 18:27:05 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:19.491 18:27:05 event -- scripts/common.sh@368 -- # return 0
00:06:19.491 18:27:05 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:19.491 18:27:05 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:19.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:19.491 --rc genhtml_branch_coverage=1
00:06:19.491 --rc genhtml_function_coverage=1
00:06:19.491 --rc genhtml_legend=1
00:06:19.491 --rc geninfo_all_blocks=1
00:06:19.491 --rc geninfo_unexecuted_blocks=1
00:06:19.491 
00:06:19.491 '
00:06:19.491 18:27:05 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:19.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:19.491 --rc genhtml_branch_coverage=1
00:06:19.491 --rc genhtml_function_coverage=1
00:06:19.491 --rc genhtml_legend=1
00:06:19.491 --rc geninfo_all_blocks=1
00:06:19.491 --rc geninfo_unexecuted_blocks=1
00:06:19.491 
00:06:19.491 '
00:06:19.491 18:27:05 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:19.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:19.491 --rc genhtml_branch_coverage=1
00:06:19.491 --rc genhtml_function_coverage=1
00:06:19.491 --rc genhtml_legend=1
00:06:19.491 --rc geninfo_all_blocks=1
00:06:19.491 --rc geninfo_unexecuted_blocks=1
00:06:19.491 
00:06:19.491 '
00:06:19.491 18:27:05 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:19.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:19.491 --rc genhtml_branch_coverage=1
00:06:19.491 --rc genhtml_function_coverage=1
00:06:19.491 --rc genhtml_legend=1
00:06:19.491 --rc geninfo_all_blocks=1
00:06:19.491 --rc geninfo_unexecuted_blocks=1
00:06:19.491 
00:06:19.491 '
00:06:19.491 18:27:05 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:06:19.491 18:27:05 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:19.491 18:27:05 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:19.491 18:27:05 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:06:19.491 18:27:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:19.491 18:27:05 event -- common/autotest_common.sh@10 -- # set +x
00:06:19.491 ************************************
00:06:19.491 START TEST event_perf
00:06:19.491 ************************************
00:06:19.491 18:27:05 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:19.491 Running I/O for 1 seconds...[2024-11-17 18:27:05.931817] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:19.491 [2024-11-17 18:27:05.931877] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596359 ]
00:06:19.491 [2024-11-17 18:27:06.002407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:19.491 [2024-11-17 18:27:06.056157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:19.491 [2024-11-17 18:27:06.056219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:19.491 [2024-11-17 18:27:06.056286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:19.491 [2024-11-17 18:27:06.056288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.867 Running I/O for 1 seconds...
00:06:20.867 lcore 0: 227143
00:06:20.867 lcore 1: 227141
00:06:20.867 lcore 2: 227142
00:06:20.867 lcore 3: 227142
00:06:20.867 done.
00:06:20.867 
00:06:20.867 real 0m1.187s
00:06:20.867 user 0m4.100s
00:06:20.867 sys 0m0.082s
00:06:20.867 18:27:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:20.867 18:27:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:20.867 ************************************
00:06:20.867 END TEST event_perf
00:06:20.867 ************************************
00:06:20.867 18:27:07 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:20.867 18:27:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:20.867 18:27:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:20.867 18:27:07 event -- common/autotest_common.sh@10 -- # set +x
00:06:20.867 ************************************
00:06:20.867 START TEST event_reactor
00:06:20.867 ************************************
00:06:20.867 18:27:07 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:20.867 [2024-11-17 18:27:07.172030] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:20.867 [2024-11-17 18:27:07.172098] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596523 ]
00:06:20.867 [2024-11-17 18:27:07.239010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:20.867 [2024-11-17 18:27:07.283215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.802 test_start
00:06:21.802 oneshot
00:06:21.802 tick 100
00:06:21.802 tick 100
00:06:21.802 tick 250
00:06:21.802 tick 100
00:06:21.802 tick 100
00:06:21.802 tick 100
00:06:21.802 tick 250
00:06:21.802 tick 500
00:06:21.802 tick 100
00:06:21.802 tick 100
00:06:21.802 tick 250
00:06:21.802 tick 100
00:06:21.802 tick 100
00:06:21.802 test_end
00:06:21.802 
00:06:21.802 real 0m1.169s
00:06:21.802 user 0m1.099s
00:06:21.802 sys 0m0.065s
00:06:21.802 18:27:08 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:21.802 18:27:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:21.802 ************************************
00:06:21.802 END TEST event_reactor
00:06:21.802 ************************************
00:06:21.802 18:27:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:21.802 18:27:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:21.802 18:27:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:21.802 18:27:08 event -- common/autotest_common.sh@10 -- # set +x
00:06:22.061 ************************************
00:06:22.061 START TEST event_reactor_perf
00:06:22.061 ************************************
00:06:22.061 18:27:08 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:22.061 [2024-11-17 18:27:08.394609] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:22.061 [2024-11-17 18:27:08.394690] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597035 ]
00:06:22.061 [2024-11-17 18:27:08.466075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.061 [2024-11-17 18:27:08.514573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.995 test_start
00:06:22.995 test_end
00:06:22.995 Performance: 432360 events per second
00:06:22.995 
00:06:22.995 real 0m1.178s
00:06:22.995 user 0m1.106s
00:06:22.995 sys 0m0.067s
00:06:22.995 18:27:09 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:22.995 18:27:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:22.995 ************************************
00:06:22.995 END TEST event_reactor_perf
00:06:22.995 ************************************
00:06:23.254 18:27:09 event -- event/event.sh@49 -- # uname -s
00:06:23.254 18:27:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:23.254 18:27:09 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:23.254 18:27:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:23.254 18:27:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:23.254 18:27:09 event -- common/autotest_common.sh@10 -- # set +x
00:06:23.254 ************************************
00:06:23.254 START TEST event_scheduler
00:06:23.254 ************************************
00:06:23.254 18:27:09 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:23.254 * Looking for test storage...
00:06:23.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:06:23.254 18:27:09 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:23.254 18:27:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:06:23.254 18:27:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:23.254 18:27:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:06:23.254 18:27:09 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:23.255 18:27:09 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:23.255 18:27:09 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:06:23.255 18:27:09 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:23.255 18:27:09 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:23.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:23.255 --rc genhtml_branch_coverage=1
00:06:23.255 --rc genhtml_function_coverage=1
00:06:23.255 --rc genhtml_legend=1
00:06:23.255 --rc geninfo_all_blocks=1
00:06:23.255 --rc geninfo_unexecuted_blocks=1
00:06:23.255 
00:06:23.255 '
00:06:23.255 18:27:09 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:23.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:23.255 --rc genhtml_branch_coverage=1
00:06:23.255 --rc genhtml_function_coverage=1
00:06:23.255 --rc genhtml_legend=1
00:06:23.255 --rc geninfo_all_blocks=1
00:06:23.255 --rc geninfo_unexecuted_blocks=1
00:06:23.255 
00:06:23.255 '
00:06:23.255 18:27:09 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:23.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:23.255 --rc genhtml_branch_coverage=1
00:06:23.255 --rc genhtml_function_coverage=1
00:06:23.255 --rc genhtml_legend=1
00:06:23.255 --rc geninfo_all_blocks=1
00:06:23.255 --rc geninfo_unexecuted_blocks=1
00:06:23.255 
00:06:23.255 '
00:06:23.255 18:27:09 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:23.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:23.255 --rc genhtml_branch_coverage=1
00:06:23.255 --rc genhtml_function_coverage=1
00:06:23.255 --rc genhtml_legend=1
00:06:23.255 --rc geninfo_all_blocks=1
00:06:23.255 --rc geninfo_unexecuted_blocks=1
00:06:23.255 
00:06:23.255 '
00:06:23.255 18:27:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:23.255 18:27:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=597368
00:06:23.255 18:27:09 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:23.255 18:27:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:23.255 18:27:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 597368
00:06:23.255 18:27:09 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 597368 ']'
00:06:23.255 18:27:09 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:23.255 18:27:09 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:23.255 18:27:09 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:23.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:23.255 18:27:09 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:23.255 18:27:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:23.255 [2024-11-17 18:27:09.802258] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:23.255 [2024-11-17 18:27:09.802342] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597368 ]
00:06:23.513 [2024-11-17 18:27:09.871860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:23.513 [2024-11-17 18:27:09.922490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.513 [2024-11-17 18:27:09.922595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:23.513 [2024-11-17 18:27:09.922696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:23.513 [2024-11-17 18:27:09.922700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:23.513 18:27:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:23.513 18:27:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:06:23.513 18:27:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:23.513 18:27:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.513 18:27:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:23.513 [2024-11-17 18:27:10.047792] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:06:23.513 [2024-11-17 18:27:10.047846] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:06:23.513 [2024-11-17 18:27:10.047866] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:23.513 [2024-11-17 18:27:10.047877] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:23.513 [2024-11-17 18:27:10.047888] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:23.513 18:27:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.513 18:27:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:23.513 18:27:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.513 18:27:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:23.772 [2024-11-17 18:27:10.147563] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:23.772 18:27:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.772 18:27:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:23.772 18:27:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:23.772 18:27:10 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:23.772 18:27:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:23.772 ************************************
00:06:23.772 START TEST scheduler_create_thread
00:06:23.772 ************************************
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.772 2
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.772 3
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.772 4
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.772 5
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.772 6
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.772 7
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.772 8
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.772 9
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.772 10
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.772 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:24.338 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:24.338 
00:06:24.338 real 0m0.591s
00:06:24.338 user 0m0.009s
00:06:24.338 sys 0m0.004s
00:06:24.338 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:24.338 18:27:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:24.338 ************************************
00:06:24.338 END TEST scheduler_create_thread
00:06:24.338 ************************************
00:06:24.338 18:27:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:24.338 18:27:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 597368
00:06:24.338 18:27:10 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 597368 ']'
00:06:24.338 18:27:10 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 597368
00:06:24.338 18:27:10 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:06:24.338 18:27:10 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:24.338 18:27:10 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597368
00:06:24.338 18:27:10 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:06:24.338 18:27:10 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:06:24.338 18:27:10 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597368'
00:06:24.338 killing process with pid 597368
00:06:24.338 18:27:10 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 597368
00:06:24.338 18:27:10 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 597368
00:06:24.905 [2024-11-17 18:27:11.247708] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:24.905 
00:06:24.905 real 0m1.817s
00:06:24.905 user 0m2.507s
00:06:24.905 sys 0m0.367s
00:06:24.905 18:27:11 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:24.905 18:27:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:24.905 ************************************
00:06:24.905 END TEST event_scheduler
00:06:24.905 ************************************
00:06:24.905 18:27:11 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:24.905 18:27:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:24.905 18:27:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:24.905 18:27:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:24.905 18:27:11 event -- common/autotest_common.sh@10 -- # set +x
00:06:25.164 ************************************
00:06:25.164 START TEST app_repeat
00:06:25.164 ************************************
00:06:25.164 18:27:11 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:06:25.164 18:27:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:25.164 18:27:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:25.164 18:27:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:25.164 18:27:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:25.164 18:27:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:25.164 18:27:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:25.164 18:27:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:25.164 18:27:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=597684
00:06:25.164 18:27:11 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:25.164 18:27:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:25.164 18:27:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 597684'
00:06:25.164 Process app_repeat pid: 597684
00:06:25.164 18:27:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:25.164 18:27:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:25.164 spdk_app_start Round 0
00:06:25.164 18:27:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 597684 /var/tmp/spdk-nbd.sock
00:06:25.164 18:27:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 597684 ']'
00:06:25.164 18:27:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:25.164 18:27:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:25.164 18:27:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:25.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:25.164 18:27:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:25.164 18:27:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:25.164 [2024-11-17 18:27:11.515036] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:06:25.164 [2024-11-17 18:27:11.515102] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597684 ] 00:06:25.164 [2024-11-17 18:27:11.580366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.164 [2024-11-17 18:27:11.623997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.164 [2024-11-17 18:27:11.624000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.422 18:27:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.422 18:27:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:25.422 18:27:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.680 Malloc0 00:06:25.680 18:27:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.938 Malloc1 00:06:25.938 18:27:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.938 18:27:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.938 18:27:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.938 18:27:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.938 18:27:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.938 18:27:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.938 18:27:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.938 
18:27:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.938 18:27:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.938 18:27:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.938 18:27:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.938 18:27:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.938 18:27:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:25.938 18:27:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.938 18:27:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.938 18:27:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.196 /dev/nbd0 00:06:26.196 18:27:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.196 18:27:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.196 18:27:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:26.196 18:27:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:26.196 18:27:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.196 18:27:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.196 18:27:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:26.196 18:27:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:26.196 18:27:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.196 18:27:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.196 18:27:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:26.196 1+0 records in 00:06:26.196 1+0 records out 00:06:26.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217207 s, 18.9 MB/s 00:06:26.196 18:27:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.196 18:27:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:26.196 18:27:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.196 18:27:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.196 18:27:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:26.196 18:27:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.196 18:27:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.196 18:27:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.454 /dev/nbd1 00:06:26.454 18:27:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.454 18:27:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.454 18:27:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:26.454 18:27:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:26.454 18:27:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.454 18:27:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.454 18:27:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:26.454 18:27:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:26.454 18:27:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.454 18:27:12 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.454 18:27:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.454 1+0 records in 00:06:26.454 1+0 records out 00:06:26.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251299 s, 16.3 MB/s 00:06:26.454 18:27:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.454 18:27:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:26.454 18:27:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.454 18:27:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.454 18:27:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:26.454 18:27:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.454 18:27:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.454 18:27:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.454 18:27:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.454 18:27:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.712 18:27:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.712 { 00:06:26.712 "nbd_device": "/dev/nbd0", 00:06:26.712 "bdev_name": "Malloc0" 00:06:26.712 }, 00:06:26.712 { 00:06:26.712 "nbd_device": "/dev/nbd1", 00:06:26.712 "bdev_name": "Malloc1" 00:06:26.712 } 00:06:26.712 ]' 00:06:26.712 18:27:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.712 { 00:06:26.712 "nbd_device": "/dev/nbd0", 00:06:26.712 "bdev_name": "Malloc0" 00:06:26.712 
}, 00:06:26.712 { 00:06:26.712 "nbd_device": "/dev/nbd1", 00:06:26.712 "bdev_name": "Malloc1" 00:06:26.712 } 00:06:26.712 ]' 00:06:26.712 18:27:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.970 /dev/nbd1' 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.970 /dev/nbd1' 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.970 256+0 records in 00:06:26.970 256+0 records out 00:06:26.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501974 s, 209 MB/s 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.970 256+0 records in 00:06:26.970 256+0 records out 00:06:26.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200695 s, 52.2 MB/s 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.970 256+0 records in 00:06:26.970 256+0 records out 00:06:26.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213492 s, 49.1 MB/s 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.970 18:27:13 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.970 18:27:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.228 18:27:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.228 18:27:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.228 18:27:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.228 18:27:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.228 18:27:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.228 18:27:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.228 18:27:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.228 18:27:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.228 18:27:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.228 18:27:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.486 18:27:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.486 18:27:13 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.486 18:27:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.486 18:27:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.486 18:27:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.486 18:27:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.486 18:27:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.486 18:27:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.486 18:27:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.486 18:27:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.486 18:27:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.744 18:27:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.744 18:27:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.744 18:27:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.744 18:27:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.744 18:27:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.744 18:27:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.744 18:27:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.744 18:27:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.744 18:27:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.744 18:27:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.744 18:27:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.744 18:27:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.744 18:27:14 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.310 18:27:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.310 [2024-11-17 18:27:14.759402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.310 [2024-11-17 18:27:14.802752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.310 [2024-11-17 18:27:14.802753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.310 [2024-11-17 18:27:14.860439] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.310 [2024-11-17 18:27:14.860507] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.588 18:27:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.588 18:27:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:31.588 spdk_app_start Round 1 00:06:31.588 18:27:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 597684 /var/tmp/spdk-nbd.sock 00:06:31.588 18:27:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 597684 ']' 00:06:31.588 18:27:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.588 18:27:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.588 18:27:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:31.588 18:27:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.588 18:27:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.588 18:27:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.588 18:27:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:31.588 18:27:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.588 Malloc0 00:06:31.588 18:27:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.154 Malloc1 00:06:32.154 18:27:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.154 18:27:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.411 /dev/nbd0 00:06:32.411 18:27:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.411 18:27:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.411 18:27:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:32.411 18:27:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:32.411 18:27:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.411 18:27:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.411 18:27:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:32.411 18:27:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:32.411 18:27:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.411 18:27:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.411 18:27:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.411 1+0 records in 00:06:32.411 1+0 records out 00:06:32.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022468 s, 18.2 MB/s 00:06:32.411 18:27:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.411 18:27:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:32.411 18:27:18 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.411 18:27:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.411 18:27:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:32.411 18:27:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.411 18:27:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.411 18:27:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.669 /dev/nbd1 00:06:32.669 18:27:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.669 18:27:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.669 18:27:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:32.669 18:27:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:32.669 18:27:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.669 18:27:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.669 18:27:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:32.669 18:27:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:32.669 18:27:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.669 18:27:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.669 18:27:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.669 1+0 records in 00:06:32.669 1+0 records out 00:06:32.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018271 s, 22.4 MB/s 00:06:32.669 18:27:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.669 18:27:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:32.669 18:27:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.669 18:27:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.669 18:27:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:32.669 18:27:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.669 18:27:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.669 18:27:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.669 18:27:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.669 18:27:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.927 { 00:06:32.927 "nbd_device": "/dev/nbd0", 00:06:32.927 "bdev_name": "Malloc0" 00:06:32.927 }, 00:06:32.927 { 00:06:32.927 "nbd_device": "/dev/nbd1", 00:06:32.927 "bdev_name": "Malloc1" 00:06:32.927 } 00:06:32.927 ]' 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.927 { 00:06:32.927 "nbd_device": "/dev/nbd0", 00:06:32.927 "bdev_name": "Malloc0" 00:06:32.927 }, 00:06:32.927 { 00:06:32.927 "nbd_device": "/dev/nbd1", 00:06:32.927 "bdev_name": "Malloc1" 00:06:32.927 } 00:06:32.927 ]' 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.927 /dev/nbd1' 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.927 /dev/nbd1' 00:06:32.927 
18:27:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.927 256+0 records in 00:06:32.927 256+0 records out 00:06:32.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519545 s, 202 MB/s 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.927 256+0 records in 00:06:32.927 256+0 records out 00:06:32.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02004 s, 52.3 MB/s 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.927 256+0 records in 00:06:32.927 256+0 records out 00:06:32.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212433 s, 49.4 MB/s 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.927 18:27:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.492 18:27:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.492 18:27:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.492 18:27:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.492 18:27:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.492 18:27:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.492 18:27:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.492 18:27:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.493 18:27:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.493 18:27:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.493 18:27:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.750 18:27:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.750 18:27:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.751 18:27:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.751 18:27:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.751 18:27:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.751 18:27:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.751 18:27:20 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:33.751 18:27:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.751 18:27:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.751 18:27:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.751 18:27:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.008 18:27:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.009 18:27:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.009 18:27:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.009 18:27:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.009 18:27:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.009 18:27:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.009 18:27:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.009 18:27:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.009 18:27:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.009 18:27:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.009 18:27:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.009 18:27:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.009 18:27:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.267 18:27:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.525 [2024-11-17 18:27:20.910075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.525 [2024-11-17 18:27:20.953014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.525 [2024-11-17 18:27:20.953018] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.525 [2024-11-17 18:27:21.011849] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.525 [2024-11-17 18:27:21.011916] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.806 18:27:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.806 18:27:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:37.806 spdk_app_start Round 2 00:06:37.806 18:27:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 597684 /var/tmp/spdk-nbd.sock 00:06:37.806 18:27:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 597684 ']' 00:06:37.806 18:27:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.806 18:27:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.806 18:27:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:37.806 18:27:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.806 18:27:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.806 18:27:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.806 18:27:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:37.806 18:27:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.806 Malloc0 00:06:37.806 18:27:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.064 Malloc1 00:06:38.064 18:27:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.064 18:27:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.064 18:27:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.064 18:27:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.064 18:27:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.064 18:27:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.064 18:27:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.064 18:27:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.064 18:27:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.064 18:27:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.065 18:27:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.065 18:27:24 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:38.065 18:27:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.065 18:27:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.065 18:27:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.065 18:27:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.323 /dev/nbd0 00:06:38.323 18:27:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.323 18:27:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.323 18:27:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:38.323 18:27:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:38.323 18:27:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.323 18:27:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.323 18:27:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:38.323 18:27:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:38.323 18:27:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.323 18:27:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.323 18:27:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.323 1+0 records in 00:06:38.323 1+0 records out 00:06:38.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171894 s, 23.8 MB/s 00:06:38.323 18:27:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.581 18:27:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:38.581 18:27:24 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.581 18:27:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.582 18:27:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:38.582 18:27:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.582 18:27:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.582 18:27:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.840 /dev/nbd1 00:06:38.840 18:27:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.840 18:27:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.840 18:27:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:38.840 18:27:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:38.840 18:27:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:38.840 18:27:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:38.840 18:27:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:38.840 18:27:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:38.840 18:27:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:38.840 18:27:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:38.840 18:27:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.840 1+0 records in 00:06:38.840 1+0 records out 00:06:38.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226884 s, 18.1 MB/s 00:06:38.840 18:27:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.840 18:27:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:38.840 18:27:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.840 18:27:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:38.840 18:27:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:38.840 18:27:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.840 18:27:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.840 18:27:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.840 18:27:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.840 18:27:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.099 { 00:06:39.099 "nbd_device": "/dev/nbd0", 00:06:39.099 "bdev_name": "Malloc0" 00:06:39.099 }, 00:06:39.099 { 00:06:39.099 "nbd_device": "/dev/nbd1", 00:06:39.099 "bdev_name": "Malloc1" 00:06:39.099 } 00:06:39.099 ]' 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.099 { 00:06:39.099 "nbd_device": "/dev/nbd0", 00:06:39.099 "bdev_name": "Malloc0" 00:06:39.099 }, 00:06:39.099 { 00:06:39.099 "nbd_device": "/dev/nbd1", 00:06:39.099 "bdev_name": "Malloc1" 00:06:39.099 } 00:06:39.099 ]' 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.099 /dev/nbd1' 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.099 /dev/nbd1' 00:06:39.099 
18:27:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.099 256+0 records in 00:06:39.099 256+0 records out 00:06:39.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043823 s, 239 MB/s 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.099 256+0 records in 00:06:39.099 256+0 records out 00:06:39.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212863 s, 49.3 MB/s 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.099 256+0 records in 00:06:39.099 256+0 records out 00:06:39.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226839 s, 46.2 MB/s 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.099 18:27:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.357 18:27:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.357 18:27:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.357 18:27:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.357 18:27:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.357 18:27:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.357 18:27:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.357 18:27:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.357 18:27:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.357 18:27:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.357 18:27:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.614 18:27:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.614 18:27:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.614 18:27:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.615 18:27:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.615 18:27:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.615 18:27:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.615 18:27:26 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:39.615 18:27:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.872 18:27:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.872 18:27:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.872 18:27:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.130 18:27:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.130 18:27:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.130 18:27:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.130 18:27:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.130 18:27:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.130 18:27:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.130 18:27:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:40.130 18:27:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.130 18:27:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.130 18:27:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.130 18:27:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.130 18:27:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.130 18:27:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:40.387 18:27:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.646 [2024-11-17 18:27:27.001468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.646 [2024-11-17 18:27:27.044542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.646 [2024-11-17 18:27:27.044542] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.646 [2024-11-17 18:27:27.103621] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.646 [2024-11-17 18:27:27.103724] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.264 18:27:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 597684 /var/tmp/spdk-nbd.sock 00:06:43.264 18:27:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 597684 ']' 00:06:43.264 18:27:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.264 18:27:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.264 18:27:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:43.264 18:27:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.264 18:27:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.833 18:27:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.833 18:27:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:43.833 18:27:30 event.app_repeat -- event/event.sh@39 -- # killprocess 597684 00:06:43.833 18:27:30 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 597684 ']' 00:06:43.833 18:27:30 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 597684 00:06:43.833 18:27:30 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:43.833 18:27:30 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.833 18:27:30 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597684 00:06:43.833 18:27:30 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.833 18:27:30 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.833 18:27:30 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597684' 00:06:43.833 killing process with pid 597684 00:06:43.833 18:27:30 event.app_repeat -- common/autotest_common.sh@973 -- # kill 597684 00:06:43.833 18:27:30 event.app_repeat -- common/autotest_common.sh@978 -- # wait 597684 00:06:43.833 spdk_app_start is called in Round 0. 00:06:43.833 Shutdown signal received, stop current app iteration 00:06:43.833 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:06:43.833 spdk_app_start is called in Round 1. 00:06:43.833 Shutdown signal received, stop current app iteration 00:06:43.833 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:06:43.833 spdk_app_start is called in Round 2. 
00:06:43.833 Shutdown signal received, stop current app iteration 00:06:43.833 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 reinitialization... 00:06:43.833 spdk_app_start is called in Round 3. 00:06:43.833 Shutdown signal received, stop current app iteration 00:06:43.833 18:27:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:43.833 18:27:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:43.833 00:06:43.833 real 0m18.829s 00:06:43.833 user 0m41.700s 00:06:43.833 sys 0m3.207s 00:06:43.833 18:27:30 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.833 18:27:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.833 ************************************ 00:06:43.833 END TEST app_repeat 00:06:43.833 ************************************ 00:06:43.833 18:27:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:43.833 18:27:30 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:43.833 18:27:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.833 18:27:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.833 18:27:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.833 ************************************ 00:06:43.833 START TEST cpu_locks 00:06:43.833 ************************************ 00:06:43.833 18:27:30 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:44.093 * Looking for test storage... 
00:06:44.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:44.093 18:27:30 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.093 18:27:30 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.093 18:27:30 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.093 18:27:30 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.093 18:27:30 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:44.093 18:27:30 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.093 18:27:30 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.093 --rc genhtml_branch_coverage=1 00:06:44.093 --rc genhtml_function_coverage=1 00:06:44.093 --rc genhtml_legend=1 00:06:44.093 --rc geninfo_all_blocks=1 00:06:44.093 --rc geninfo_unexecuted_blocks=1 00:06:44.093 00:06:44.093 ' 00:06:44.093 18:27:30 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.093 --rc genhtml_branch_coverage=1 00:06:44.093 --rc genhtml_function_coverage=1 00:06:44.093 --rc genhtml_legend=1 00:06:44.093 --rc geninfo_all_blocks=1 00:06:44.093 --rc geninfo_unexecuted_blocks=1 
00:06:44.093 00:06:44.093 ' 00:06:44.093 18:27:30 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.093 --rc genhtml_branch_coverage=1 00:06:44.093 --rc genhtml_function_coverage=1 00:06:44.093 --rc genhtml_legend=1 00:06:44.093 --rc geninfo_all_blocks=1 00:06:44.093 --rc geninfo_unexecuted_blocks=1 00:06:44.093 00:06:44.093 ' 00:06:44.093 18:27:30 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.093 --rc genhtml_branch_coverage=1 00:06:44.093 --rc genhtml_function_coverage=1 00:06:44.093 --rc genhtml_legend=1 00:06:44.093 --rc geninfo_all_blocks=1 00:06:44.093 --rc geninfo_unexecuted_blocks=1 00:06:44.093 00:06:44.093 ' 00:06:44.093 18:27:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:44.093 18:27:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:44.093 18:27:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:44.093 18:27:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:44.093 18:27:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.093 18:27:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.093 18:27:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.093 ************************************ 00:06:44.093 START TEST default_locks 00:06:44.093 ************************************ 00:06:44.093 18:27:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:44.093 18:27:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=600180 00:06:44.093 18:27:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:06:44.093 18:27:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 600180 00:06:44.093 18:27:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 600180 ']' 00:06:44.093 18:27:30 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.093 18:27:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.093 18:27:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.093 18:27:30 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.093 18:27:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.093 [2024-11-17 18:27:30.600195] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:44.093 [2024-11-17 18:27:30.600286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600180 ] 00:06:44.093 [2024-11-17 18:27:30.667988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.352 [2024-11-17 18:27:30.718501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.610 18:27:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.610 18:27:30 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:44.610 18:27:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 600180 00:06:44.610 18:27:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 600180 00:06:44.610 18:27:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.868 lslocks: write error 00:06:44.868 18:27:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 600180 00:06:44.868 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 600180 ']' 00:06:44.868 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 600180 00:06:44.868 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:44.868 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.868 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 600180 00:06:44.868 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.868 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.868 18:27:31 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 600180' 00:06:44.868 killing process with pid 600180 00:06:44.868 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 600180 00:06:44.868 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 600180 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 600180 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 600180 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 600180 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 600180 ']' 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (600180) - No such process 00:06:45.436 ERROR: process (pid: 600180) is no longer running 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:45.436 00:06:45.436 real 0m1.168s 00:06:45.436 user 0m1.131s 00:06:45.436 sys 0m0.534s 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.436 18:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.436 ************************************ 00:06:45.436 END TEST default_locks 00:06:45.436 ************************************ 00:06:45.436 18:27:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:45.436 18:27:31 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.436 18:27:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.436 18:27:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.436 ************************************ 00:06:45.436 START TEST default_locks_via_rpc 00:06:45.436 ************************************ 00:06:45.436 18:27:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:45.436 18:27:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=600346 00:06:45.436 18:27:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.436 18:27:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 600346 00:06:45.436 18:27:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 600346 ']' 00:06:45.436 18:27:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.436 18:27:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.436 18:27:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.436 18:27:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.436 18:27:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.436 [2024-11-17 18:27:31.826922] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:45.436 [2024-11-17 18:27:31.827024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600346 ] 00:06:45.436 [2024-11-17 18:27:31.893770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.436 [2024-11-17 18:27:31.942543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.695 18:27:32 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 600346 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 600346 00:06:45.695 18:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.952 18:27:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 600346 00:06:45.952 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 600346 ']' 00:06:45.952 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 600346 00:06:45.952 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:45.952 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.953 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 600346 00:06:45.953 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.953 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.953 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 600346' 00:06:45.953 killing process with pid 600346 00:06:45.953 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 600346 00:06:45.953 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 600346 00:06:46.521 00:06:46.521 real 0m1.079s 00:06:46.521 user 0m1.054s 00:06:46.521 sys 0m0.495s 00:06:46.521 18:27:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.521 18:27:32 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.521 ************************************ 00:06:46.521 END TEST default_locks_via_rpc 00:06:46.521 ************************************ 00:06:46.521 18:27:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:46.521 18:27:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.521 18:27:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.521 18:27:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.521 ************************************ 00:06:46.521 START TEST non_locking_app_on_locked_coremask 00:06:46.521 ************************************ 00:06:46.521 18:27:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:46.521 18:27:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=600504 00:06:46.521 18:27:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.521 18:27:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 600504 /var/tmp/spdk.sock 00:06:46.521 18:27:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 600504 ']' 00:06:46.521 18:27:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.521 18:27:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.521 18:27:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:46.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.521 18:27:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.521 18:27:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.521 [2024-11-17 18:27:32.955498] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:46.521 [2024-11-17 18:27:32.955598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600504 ] 00:06:46.521 [2024-11-17 18:27:33.021790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.521 [2024-11-17 18:27:33.071301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.779 18:27:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.779 18:27:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:46.779 18:27:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=600514 00:06:46.779 18:27:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:46.779 18:27:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 600514 /var/tmp/spdk2.sock 00:06:46.779 18:27:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 600514 ']' 00:06:46.779 18:27:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:46.779 18:27:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.779 18:27:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.779 18:27:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.779 18:27:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.037 [2024-11-17 18:27:33.381879] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:47.037 [2024-11-17 18:27:33.381958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600514 ] 00:06:47.037 [2024-11-17 18:27:33.483097] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:47.037 [2024-11-17 18:27:33.483124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.037 [2024-11-17 18:27:33.576727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.603 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.603 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:47.603 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 600504 00:06:47.603 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 600504 00:06:47.603 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.168 lslocks: write error 00:06:48.168 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 600504 00:06:48.168 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 600504 ']' 00:06:48.168 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 600504 00:06:48.168 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:48.168 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.168 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 600504 00:06:48.168 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.168 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.168 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 600504' 00:06:48.168 killing process with pid 600504 00:06:48.168 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 600504 00:06:48.168 18:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 600504 00:06:49.102 18:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 600514 00:06:49.102 18:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 600514 ']' 00:06:49.102 18:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 600514 00:06:49.102 18:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:49.102 18:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.102 18:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 600514 00:06:49.102 18:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.102 18:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.102 18:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 600514' 00:06:49.102 killing process with pid 600514 00:06:49.102 18:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 600514 00:06:49.102 18:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 600514 00:06:49.360 00:06:49.360 real 0m2.862s 00:06:49.360 user 0m2.921s 00:06:49.360 sys 0m0.984s 00:06:49.360 18:27:35 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.360 18:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.360 ************************************ 00:06:49.360 END TEST non_locking_app_on_locked_coremask 00:06:49.360 ************************************ 00:06:49.360 18:27:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:49.360 18:27:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.360 18:27:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.360 18:27:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.360 ************************************ 00:06:49.360 START TEST locking_app_on_unlocked_coremask 00:06:49.360 ************************************ 00:06:49.360 18:27:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:49.360 18:27:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=600934 00:06:49.360 18:27:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:49.360 18:27:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 600934 /var/tmp/spdk.sock 00:06:49.360 18:27:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 600934 ']' 00:06:49.360 18:27:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.360 18:27:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.360 18:27:35 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.360 18:27:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.360 18:27:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.360 [2024-11-17 18:27:35.870136] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:49.360 [2024-11-17 18:27:35.870255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600934 ] 00:06:49.619 [2024-11-17 18:27:35.936581] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
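The `spdk_cpu_lock` entries that `locks_exist` looks for via `lslocks -p PID | grep -q spdk_cpu_lock` are per-core advisory file locks; `flock(1)` demonstrates the same mechanism from the shell. This is a hedged sketch, not the harness's code: the lock file path and fd number 9 are illustrative.

```shell
# Take and release an advisory lock the way a per-core lock file works.
lockfile=$(mktemp)
exec 9>"$lockfile"                # open fd 9 on the lock file
if flock -n 9; then               # non-blocking: fail fast if held
  echo "core lock acquired"
else
  echo "core already claimed by another process"
fi
flock -u 9                        # release the lock
exec 9>&-                         # close the fd
rm -f "$lockfile"
```

With `--disable-cpumask-locks` (as in this test), the target skips creating these files, which is why the trace prints "CPU core locks deactivated."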
00:06:49.619 [2024-11-17 18:27:35.936635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.619 [2024-11-17 18:27:35.978720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.877 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.877 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:49.877 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=600937 00:06:49.877 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.877 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 600937 /var/tmp/spdk2.sock 00:06:49.877 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 600937 ']' 00:06:49.877 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.877 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.877 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.877 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.877 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.877 [2024-11-17 18:27:36.278458] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:49.877 [2024-11-17 18:27:36.278544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600937 ] 00:06:49.877 [2024-11-17 18:27:36.376888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.136 [2024-11-17 18:27:36.470265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.395 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.395 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:50.395 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 600937 00:06:50.395 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 600937 00:06:50.395 18:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.960 lslocks: write error 00:06:50.960 18:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 600934 00:06:50.960 18:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 600934 ']' 00:06:50.960 18:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 600934 00:06:50.960 18:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:50.960 18:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.960 18:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 600934 00:06:50.960 18:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.960 18:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.960 18:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 600934' 00:06:50.960 killing process with pid 600934 00:06:50.960 18:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 600934 00:06:50.960 18:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 600934 00:06:51.894 18:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 600937 00:06:51.895 18:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 600937 ']' 00:06:51.895 18:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 600937 00:06:51.895 18:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:51.895 18:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.895 18:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 600937 00:06:51.895 18:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.895 18:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.895 18:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 600937' 00:06:51.895 killing process with pid 600937 00:06:51.895 18:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 600937 00:06:51.895 18:27:38 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 600937 00:06:52.155 00:06:52.155 real 0m2.861s 00:06:52.155 user 0m2.902s 00:06:52.155 sys 0m1.000s 00:06:52.155 18:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.155 18:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.155 ************************************ 00:06:52.155 END TEST locking_app_on_unlocked_coremask 00:06:52.155 ************************************ 00:06:52.155 18:27:38 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:52.155 18:27:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.155 18:27:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.155 18:27:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.155 ************************************ 00:06:52.155 START TEST locking_app_on_locked_coremask 00:06:52.155 ************************************ 00:06:52.155 18:27:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:52.155 18:27:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=601246 00:06:52.155 18:27:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.155 18:27:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 601246 /var/tmp/spdk.sock 00:06:52.155 18:27:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 601246 ']' 00:06:52.155 18:27:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
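The `killprocess` helper in the trace runs `kill -0 PID` before killing and while waiting: signal 0 delivers nothing and only reports whether the PID exists and is signalable. A minimal sketch of that liveness probe; the known-dead PID below is an assumption (it exceeds the usual Linux pid range):

```shell
# kill -0 as a liveness check: succeeds for a live, signalable PID,
# fails with ESRCH for a nonexistent one.
pid=$$                                  # our own shell, certainly alive
if kill -0 "$pid" 2>/dev/null; then
  echo "process $pid is alive"
fi
kill -0 4194305 2>/dev/null || echo "no such process"
```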
00:06:52.155 18:27:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.155 18:27:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.155 18:27:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.155 18:27:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.415 [2024-11-17 18:27:38.782380] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:52.415 [2024-11-17 18:27:38.782484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601246 ] 00:06:52.415 [2024-11-17 18:27:38.853053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.415 [2024-11-17 18:27:38.896089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=601368 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 601368 /var/tmp/spdk2.sock 
00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 601368 /var/tmp/spdk2.sock 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 601368 /var/tmp/spdk2.sock 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 601368 ']' 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.674 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.674 [2024-11-17 18:27:39.191907] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:52.674 [2024-11-17 18:27:39.192001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601368 ] 00:06:52.931 [2024-11-17 18:27:39.297928] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 601246 has claimed it. 00:06:52.931 [2024-11-17 18:27:39.297997] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:53.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (601368) - No such process 00:06:53.497 ERROR: process (pid: 601368) is no longer running 00:06:53.497 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.497 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:53.497 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:53.497 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.497 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:53.497 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.497 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 601246 00:06:53.497 18:27:39 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 601246 00:06:53.497 18:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.755 lslocks: write error 00:06:53.755 18:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 601246 00:06:53.755 18:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 601246 ']' 00:06:53.755 18:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 601246 00:06:53.755 18:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:53.755 18:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.755 18:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601246 00:06:53.755 18:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.755 18:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.755 18:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 601246' 00:06:53.755 killing process with pid 601246 00:06:53.755 18:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 601246 00:06:53.755 18:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 601246 00:06:54.014 00:06:54.014 real 0m1.788s 00:06:54.014 user 0m2.003s 00:06:54.014 sys 0m0.583s 00:06:54.014 18:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.014 18:27:40 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:54.014 ************************************ 00:06:54.014 END TEST locking_app_on_locked_coremask 00:06:54.014 ************************************ 00:06:54.014 18:27:40 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:54.014 18:27:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.014 18:27:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.014 18:27:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.014 ************************************ 00:06:54.014 START TEST locking_overlapped_coremask 00:06:54.014 ************************************ 00:06:54.014 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:54.014 18:27:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=601533 00:06:54.014 18:27:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:54.014 18:27:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 601533 /var/tmp/spdk.sock 00:06:54.014 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 601533 ']' 00:06:54.014 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.014 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.014 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:54.014 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.014 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.273 [2024-11-17 18:27:40.623606] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:54.273 [2024-11-17 18:27:40.623713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601533 ] 00:06:54.273 [2024-11-17 18:27:40.690018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.273 [2024-11-17 18:27:40.734465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.273 [2024-11-17 18:27:40.734568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.273 [2024-11-17 18:27:40.734577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=601549 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 601549 /var/tmp/spdk2.sock 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 601549 /var/tmp/spdk2.sock 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 601549 /var/tmp/spdk2.sock 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 601549 ']' 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.531 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.532 18:27:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.532 [2024-11-17 18:27:41.051296] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:54.532 [2024-11-17 18:27:41.051382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601549 ] 00:06:54.789 [2024-11-17 18:27:41.162527] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 601533 has claimed it. 00:06:54.789 [2024-11-17 18:27:41.162578] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:55.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (601549) - No such process 00:06:55.355 ERROR: process (pid: 601549) is no longer running 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 601533 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 601533 ']' 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 601533 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601533 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 601533' 00:06:55.356 killing process with pid 601533 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 601533 00:06:55.356 18:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 601533 00:06:55.615 00:06:55.615 real 0m1.613s 00:06:55.615 user 0m4.562s 00:06:55.615 sys 0m0.438s 00:06:55.615 18:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.615 18:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.615 ************************************ 
00:06:55.615 END TEST locking_overlapped_coremask 00:06:55.615 ************************************ 00:06:55.873 18:27:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:55.873 18:27:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.873 18:27:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.873 18:27:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.873 ************************************ 00:06:55.873 START TEST locking_overlapped_coremask_via_rpc 00:06:55.873 ************************************ 00:06:55.873 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:55.873 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=601715 00:06:55.873 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:55.873 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 601715 /var/tmp/spdk.sock 00:06:55.873 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 601715 ']' 00:06:55.873 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.873 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.873 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:55.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.873 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.873 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.873 [2024-11-17 18:27:42.289670] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:55.873 [2024-11-17 18:27:42.289774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601715 ] 00:06:55.873 [2024-11-17 18:27:42.356606] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:55.873 [2024-11-17 18:27:42.356645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.873 [2024-11-17 18:27:42.408455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.873 [2024-11-17 18:27:42.408520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.873 [2024-11-17 18:27:42.408523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.131 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.131 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:56.131 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=601842 00:06:56.131 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 601842 /var/tmp/spdk2.sock 00:06:56.131 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 601842 ']' 00:06:56.131 18:27:42 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.131 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.131 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:56.131 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.131 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.131 18:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.389 [2024-11-17 18:27:42.728860] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:06:56.389 [2024-11-17 18:27:42.728946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601842 ] 00:06:56.389 [2024-11-17 18:27:42.835777] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:56.389 [2024-11-17 18:27:42.835813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.389 [2024-11-17 18:27:42.932418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.389 [2024-11-17 18:27:42.932481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:56.389 [2024-11-17 18:27:42.932484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.324 18:27:43 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.324 [2024-11-17 18:27:43.730775] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 601715 has claimed it. 00:06:57.324 request: 00:06:57.324 { 00:06:57.324 "method": "framework_enable_cpumask_locks", 00:06:57.324 "req_id": 1 00:06:57.324 } 00:06:57.324 Got JSON-RPC error response 00:06:57.324 response: 00:06:57.324 { 00:06:57.324 "code": -32603, 00:06:57.324 "message": "Failed to claim CPU core: 2" 00:06:57.324 } 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 601715 /var/tmp/spdk.sock 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 601715 ']' 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.324 18:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.582 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.582 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:57.582 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 601842 /var/tmp/spdk2.sock 00:06:57.582 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 601842 ']' 00:06:57.582 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.582 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.582 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:57.582 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.582 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.840 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.840 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:57.840 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:57.840 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:57.840 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:57.840 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:57.840 00:06:57.840 real 0m2.038s 00:06:57.840 user 0m1.161s 00:06:57.840 sys 0m0.155s 00:06:57.840 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.840 18:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.840 ************************************ 00:06:57.840 END TEST locking_overlapped_coremask_via_rpc 00:06:57.840 ************************************ 00:06:57.840 18:27:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:57.840 18:27:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 601715 ]] 00:06:57.840 18:27:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 601715 00:06:57.840 18:27:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 601715 ']' 00:06:57.840 18:27:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 601715 00:06:57.840 18:27:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:57.840 18:27:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.840 18:27:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601715 00:06:57.840 18:27:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.840 18:27:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.840 18:27:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 601715' 00:06:57.840 killing process with pid 601715 00:06:57.840 18:27:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 601715 00:06:57.840 18:27:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 601715 00:06:58.406 18:27:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 601842 ]] 00:06:58.406 18:27:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 601842 00:06:58.406 18:27:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 601842 ']' 00:06:58.406 18:27:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 601842 00:06:58.406 18:27:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:58.406 18:27:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.406 18:27:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601842 00:06:58.406 18:27:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:58.406 18:27:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:58.406 18:27:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 601842' 00:06:58.406 
killing process with pid 601842 00:06:58.406 18:27:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 601842 00:06:58.406 18:27:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 601842 00:06:58.666 18:27:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.666 18:27:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:58.666 18:27:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 601715 ]] 00:06:58.666 18:27:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 601715 00:06:58.666 18:27:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 601715 ']' 00:06:58.666 18:27:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 601715 00:06:58.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (601715) - No such process 00:06:58.666 18:27:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 601715 is not found' 00:06:58.666 Process with pid 601715 is not found 00:06:58.666 18:27:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 601842 ]] 00:06:58.666 18:27:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 601842 00:06:58.666 18:27:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 601842 ']' 00:06:58.666 18:27:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 601842 00:06:58.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (601842) - No such process 00:06:58.666 18:27:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 601842 is not found' 00:06:58.666 Process with pid 601842 is not found 00:06:58.666 18:27:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.666 00:06:58.666 real 0m14.778s 00:06:58.666 user 0m27.171s 00:06:58.666 sys 0m5.130s 00:06:58.666 18:27:45 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.666 18:27:45 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:58.666 ************************************ 00:06:58.666 END TEST cpu_locks 00:06:58.666 ************************************ 00:06:58.666 00:06:58.666 real 0m39.413s 00:06:58.666 user 1m17.903s 00:06:58.666 sys 0m9.176s 00:06:58.666 18:27:45 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.666 18:27:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.666 ************************************ 00:06:58.666 END TEST event 00:06:58.666 ************************************ 00:06:58.667 18:27:45 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:58.667 18:27:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.667 18:27:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.667 18:27:45 -- common/autotest_common.sh@10 -- # set +x 00:06:58.667 ************************************ 00:06:58.667 START TEST thread 00:06:58.667 ************************************ 00:06:58.667 18:27:45 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:58.926 * Looking for test storage... 
00:06:58.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:58.926 18:27:45 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.926 18:27:45 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.926 18:27:45 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.926 18:27:45 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.926 18:27:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.926 18:27:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.926 18:27:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.926 18:27:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.926 18:27:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.926 18:27:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.926 18:27:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.926 18:27:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.926 18:27:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.926 18:27:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.926 18:27:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.926 18:27:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:58.926 18:27:45 thread -- scripts/common.sh@345 -- # : 1 00:06:58.926 18:27:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.926 18:27:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.926 18:27:45 thread -- scripts/common.sh@365 -- # decimal 1 00:06:58.926 18:27:45 thread -- scripts/common.sh@353 -- # local d=1 00:06:58.926 18:27:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.926 18:27:45 thread -- scripts/common.sh@355 -- # echo 1 00:06:58.926 18:27:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.926 18:27:45 thread -- scripts/common.sh@366 -- # decimal 2 00:06:58.926 18:27:45 thread -- scripts/common.sh@353 -- # local d=2 00:06:58.926 18:27:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.926 18:27:45 thread -- scripts/common.sh@355 -- # echo 2 00:06:58.926 18:27:45 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.926 18:27:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.926 18:27:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.926 18:27:45 thread -- scripts/common.sh@368 -- # return 0 00:06:58.926 18:27:45 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.926 18:27:45 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.926 --rc genhtml_branch_coverage=1 00:06:58.926 --rc genhtml_function_coverage=1 00:06:58.926 --rc genhtml_legend=1 00:06:58.926 --rc geninfo_all_blocks=1 00:06:58.926 --rc geninfo_unexecuted_blocks=1 00:06:58.926 00:06:58.926 ' 00:06:58.926 18:27:45 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.926 --rc genhtml_branch_coverage=1 00:06:58.926 --rc genhtml_function_coverage=1 00:06:58.926 --rc genhtml_legend=1 00:06:58.926 --rc geninfo_all_blocks=1 00:06:58.926 --rc geninfo_unexecuted_blocks=1 00:06:58.926 00:06:58.926 ' 00:06:58.926 18:27:45 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.926 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.926 --rc genhtml_branch_coverage=1 00:06:58.926 --rc genhtml_function_coverage=1 00:06:58.926 --rc genhtml_legend=1 00:06:58.926 --rc geninfo_all_blocks=1 00:06:58.926 --rc geninfo_unexecuted_blocks=1 00:06:58.926 00:06:58.926 ' 00:06:58.926 18:27:45 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.926 --rc genhtml_branch_coverage=1 00:06:58.926 --rc genhtml_function_coverage=1 00:06:58.926 --rc genhtml_legend=1 00:06:58.926 --rc geninfo_all_blocks=1 00:06:58.926 --rc geninfo_unexecuted_blocks=1 00:06:58.926 00:06:58.926 ' 00:06:58.926 18:27:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.926 18:27:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:58.926 18:27:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.926 18:27:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.926 ************************************ 00:06:58.926 START TEST thread_poller_perf 00:06:58.926 ************************************ 00:06:58.926 18:27:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.926 [2024-11-17 18:27:45.415490] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:06:58.926 [2024-11-17 18:27:45.415547] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602215 ] 00:06:58.926 [2024-11-17 18:27:45.483538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.185 [2024-11-17 18:27:45.533142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.185 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:00.120 [2024-11-17T17:27:46.696Z] ====================================== 00:07:00.120 [2024-11-17T17:27:46.696Z] busy:2707622676 (cyc) 00:07:00.120 [2024-11-17T17:27:46.696Z] total_run_count: 366000 00:07:00.120 [2024-11-17T17:27:46.696Z] tsc_hz: 2700000000 (cyc) 00:07:00.120 [2024-11-17T17:27:46.696Z] ====================================== 00:07:00.120 [2024-11-17T17:27:46.696Z] poller_cost: 7397 (cyc), 2739 (nsec) 00:07:00.120 00:07:00.120 real 0m1.181s 00:07:00.120 user 0m1.108s 00:07:00.120 sys 0m0.068s 00:07:00.120 18:27:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.120 18:27:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.120 ************************************ 00:07:00.120 END TEST thread_poller_perf 00:07:00.120 ************************************ 00:07:00.120 18:27:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.120 18:27:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:00.120 18:27:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.120 18:27:46 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.120 ************************************ 00:07:00.120 START TEST thread_poller_perf 00:07:00.120 
************************************ 00:07:00.120 18:27:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.120 [2024-11-17 18:27:46.648132] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:00.120 [2024-11-17 18:27:46.648196] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602375 ] 00:07:00.379 [2024-11-17 18:27:46.717846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.379 [2024-11-17 18:27:46.764183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.379 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:01.313 [2024-11-17T17:27:47.889Z] ====================================== 00:07:01.313 [2024-11-17T17:27:47.889Z] busy:2702466837 (cyc) 00:07:01.313 [2024-11-17T17:27:47.889Z] total_run_count: 4851000 00:07:01.313 [2024-11-17T17:27:47.889Z] tsc_hz: 2700000000 (cyc) 00:07:01.313 [2024-11-17T17:27:47.889Z] ====================================== 00:07:01.313 [2024-11-17T17:27:47.889Z] poller_cost: 557 (cyc), 206 (nsec) 00:07:01.313 00:07:01.313 real 0m1.172s 00:07:01.313 user 0m1.100s 00:07:01.313 sys 0m0.068s 00:07:01.313 18:27:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.313 18:27:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.313 ************************************ 00:07:01.313 END TEST thread_poller_perf 00:07:01.313 ************************************ 00:07:01.313 18:27:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.313 00:07:01.313 real 0m2.603s 00:07:01.313 user 0m2.338s 00:07:01.313 sys 0m0.270s 00:07:01.313 18:27:47 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.313 18:27:47 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.313 ************************************ 00:07:01.313 END TEST thread 00:07:01.313 ************************************ 00:07:01.313 18:27:47 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:01.313 18:27:47 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.313 18:27:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.313 18:27:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.313 18:27:47 -- common/autotest_common.sh@10 -- # set +x 00:07:01.313 ************************************ 00:07:01.313 START TEST app_cmdline 00:07:01.313 ************************************ 00:07:01.313 18:27:47 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.572 * Looking for test storage... 00:07:01.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:01.573 18:27:47 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.573 18:27:47 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.573 18:27:47 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.573 18:27:48 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.573 18:27:48 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:01.573 18:27:48 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.573 18:27:48 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.573 --rc genhtml_branch_coverage=1 
00:07:01.573 --rc genhtml_function_coverage=1 00:07:01.573 --rc genhtml_legend=1 00:07:01.573 --rc geninfo_all_blocks=1 00:07:01.573 --rc geninfo_unexecuted_blocks=1 00:07:01.573 00:07:01.573 ' 00:07:01.573 18:27:48 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.573 --rc genhtml_branch_coverage=1 00:07:01.573 --rc genhtml_function_coverage=1 00:07:01.573 --rc genhtml_legend=1 00:07:01.573 --rc geninfo_all_blocks=1 00:07:01.573 --rc geninfo_unexecuted_blocks=1 00:07:01.573 00:07:01.573 ' 00:07:01.573 18:27:48 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.573 --rc genhtml_branch_coverage=1 00:07:01.573 --rc genhtml_function_coverage=1 00:07:01.573 --rc genhtml_legend=1 00:07:01.573 --rc geninfo_all_blocks=1 00:07:01.573 --rc geninfo_unexecuted_blocks=1 00:07:01.573 00:07:01.573 ' 00:07:01.573 18:27:48 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.573 --rc genhtml_branch_coverage=1 00:07:01.573 --rc genhtml_function_coverage=1 00:07:01.573 --rc genhtml_legend=1 00:07:01.573 --rc geninfo_all_blocks=1 00:07:01.573 --rc geninfo_unexecuted_blocks=1 00:07:01.573 00:07:01.573 ' 00:07:01.573 18:27:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.573 18:27:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=602582 00:07:01.573 18:27:48 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.573 18:27:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 602582 00:07:01.573 18:27:48 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 602582 ']' 00:07:01.573 18:27:48 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:01.573 18:27:48 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.573 18:27:48 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.573 18:27:48 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.573 18:27:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.573 [2024-11-17 18:27:48.080647] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:01.573 [2024-11-17 18:27:48.080761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602582 ] 00:07:01.832 [2024-11-17 18:27:48.152406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.832 [2024-11-17 18:27:48.199327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.090 18:27:48 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.090 18:27:48 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:02.090 18:27:48 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:02.349 { 00:07:02.349 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:07:02.349 "fields": { 00:07:02.349 "major": 25, 00:07:02.349 "minor": 1, 00:07:02.349 "patch": 0, 00:07:02.349 "suffix": "-pre", 00:07:02.349 "commit": "83e8405e4" 00:07:02.349 } 00:07:02.349 } 00:07:02.349 18:27:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:02.349 18:27:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:02.349 18:27:48 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:02.349 18:27:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:02.349 18:27:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:02.349 18:27:48 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.349 18:27:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:02.349 18:27:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.349 18:27:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:02.349 18:27:48 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.349 18:27:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:02.349 18:27:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:02.349 18:27:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.349 18:27:48 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:02.349 18:27:48 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.349 18:27:48 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.349 18:27:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.349 18:27:48 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.349 18:27:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.349 18:27:48 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.349 18:27:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:07:02.349 18:27:48 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.349 18:27:48 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:02.349 18:27:48 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.610 request: 00:07:02.610 { 00:07:02.610 "method": "env_dpdk_get_mem_stats", 00:07:02.610 "req_id": 1 00:07:02.610 } 00:07:02.610 Got JSON-RPC error response 00:07:02.610 response: 00:07:02.610 { 00:07:02.610 "code": -32601, 00:07:02.610 "message": "Method not found" 00:07:02.610 } 00:07:02.610 18:27:49 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:02.610 18:27:49 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:02.610 18:27:49 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:02.610 18:27:49 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:02.610 18:27:49 app_cmdline -- app/cmdline.sh@1 -- # killprocess 602582 00:07:02.610 18:27:49 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 602582 ']' 00:07:02.610 18:27:49 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 602582 00:07:02.610 18:27:49 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:02.610 18:27:49 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.610 18:27:49 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602582 00:07:02.610 18:27:49 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.610 18:27:49 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.610 18:27:49 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602582' 00:07:02.610 killing process with pid 602582 00:07:02.610 18:27:49 
app_cmdline -- common/autotest_common.sh@973 -- # kill 602582 00:07:02.610 18:27:49 app_cmdline -- common/autotest_common.sh@978 -- # wait 602582 00:07:02.869 00:07:02.869 real 0m1.560s 00:07:02.869 user 0m1.964s 00:07:02.869 sys 0m0.475s 00:07:02.869 18:27:49 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.869 18:27:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.869 ************************************ 00:07:02.869 END TEST app_cmdline 00:07:02.869 ************************************ 00:07:03.127 18:27:49 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:03.127 18:27:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.127 18:27:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.127 18:27:49 -- common/autotest_common.sh@10 -- # set +x 00:07:03.127 ************************************ 00:07:03.127 START TEST version 00:07:03.127 ************************************ 00:07:03.127 18:27:49 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:03.127 * Looking for test storage... 
00:07:03.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:03.127 18:27:49 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.127 18:27:49 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.127 18:27:49 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.127 18:27:49 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.127 18:27:49 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.127 18:27:49 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.127 18:27:49 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.127 18:27:49 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.127 18:27:49 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.127 18:27:49 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.127 18:27:49 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.127 18:27:49 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.127 18:27:49 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.127 18:27:49 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.127 18:27:49 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.127 18:27:49 version -- scripts/common.sh@344 -- # case "$op" in 00:07:03.127 18:27:49 version -- scripts/common.sh@345 -- # : 1 00:07:03.127 18:27:49 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.127 18:27:49 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.127 18:27:49 version -- scripts/common.sh@365 -- # decimal 1 00:07:03.127 18:27:49 version -- scripts/common.sh@353 -- # local d=1 00:07:03.127 18:27:49 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.127 18:27:49 version -- scripts/common.sh@355 -- # echo 1 00:07:03.127 18:27:49 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.127 18:27:49 version -- scripts/common.sh@366 -- # decimal 2 00:07:03.127 18:27:49 version -- scripts/common.sh@353 -- # local d=2 00:07:03.127 18:27:49 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.127 18:27:49 version -- scripts/common.sh@355 -- # echo 2 00:07:03.127 18:27:49 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.127 18:27:49 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.127 18:27:49 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.127 18:27:49 version -- scripts/common.sh@368 -- # return 0 00:07:03.127 18:27:49 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.127 18:27:49 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.127 --rc genhtml_branch_coverage=1 00:07:03.127 --rc genhtml_function_coverage=1 00:07:03.127 --rc genhtml_legend=1 00:07:03.127 --rc geninfo_all_blocks=1 00:07:03.127 --rc geninfo_unexecuted_blocks=1 00:07:03.127 00:07:03.127 ' 00:07:03.127 18:27:49 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.127 --rc genhtml_branch_coverage=1 00:07:03.127 --rc genhtml_function_coverage=1 00:07:03.127 --rc genhtml_legend=1 00:07:03.127 --rc geninfo_all_blocks=1 00:07:03.127 --rc geninfo_unexecuted_blocks=1 00:07:03.127 00:07:03.127 ' 00:07:03.127 18:27:49 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:03.127 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.127 --rc genhtml_branch_coverage=1 00:07:03.127 --rc genhtml_function_coverage=1 00:07:03.127 --rc genhtml_legend=1 00:07:03.127 --rc geninfo_all_blocks=1 00:07:03.127 --rc geninfo_unexecuted_blocks=1 00:07:03.127 00:07:03.127 ' 00:07:03.127 18:27:49 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.127 --rc genhtml_branch_coverage=1 00:07:03.127 --rc genhtml_function_coverage=1 00:07:03.127 --rc genhtml_legend=1 00:07:03.127 --rc geninfo_all_blocks=1 00:07:03.127 --rc geninfo_unexecuted_blocks=1 00:07:03.127 00:07:03.127 ' 00:07:03.127 18:27:49 version -- app/version.sh@17 -- # get_header_version major 00:07:03.127 18:27:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:03.127 18:27:49 version -- app/version.sh@14 -- # cut -f2 00:07:03.127 18:27:49 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.127 18:27:49 version -- app/version.sh@17 -- # major=25 00:07:03.127 18:27:49 version -- app/version.sh@18 -- # get_header_version minor 00:07:03.127 18:27:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:03.127 18:27:49 version -- app/version.sh@14 -- # cut -f2 00:07:03.127 18:27:49 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.127 18:27:49 version -- app/version.sh@18 -- # minor=1 00:07:03.127 18:27:49 version -- app/version.sh@19 -- # get_header_version patch 00:07:03.127 18:27:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:03.127 18:27:49 version -- app/version.sh@14 -- # cut -f2 00:07:03.127 18:27:49 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.127 
18:27:49 version -- app/version.sh@19 -- # patch=0 00:07:03.127 18:27:49 version -- app/version.sh@20 -- # get_header_version suffix 00:07:03.127 18:27:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:03.127 18:27:49 version -- app/version.sh@14 -- # cut -f2 00:07:03.127 18:27:49 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.127 18:27:49 version -- app/version.sh@20 -- # suffix=-pre 00:07:03.127 18:27:49 version -- app/version.sh@22 -- # version=25.1 00:07:03.127 18:27:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:03.127 18:27:49 version -- app/version.sh@28 -- # version=25.1rc0 00:07:03.127 18:27:49 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:03.127 18:27:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:03.127 18:27:49 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:03.127 18:27:49 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:03.127 00:07:03.127 real 0m0.197s 00:07:03.127 user 0m0.131s 00:07:03.127 sys 0m0.092s 00:07:03.127 18:27:49 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.127 18:27:49 version -- common/autotest_common.sh@10 -- # set +x 00:07:03.127 ************************************ 00:07:03.127 END TEST version 00:07:03.127 ************************************ 00:07:03.386 18:27:49 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:03.386 18:27:49 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:03.386 18:27:49 -- spdk/autotest.sh@194 -- # uname -s 00:07:03.386 18:27:49 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:03.386 18:27:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:03.386 18:27:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:03.386 18:27:49 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:03.386 18:27:49 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:03.386 18:27:49 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:03.386 18:27:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:03.386 18:27:49 -- common/autotest_common.sh@10 -- # set +x 00:07:03.386 18:27:49 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:03.386 18:27:49 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:03.386 18:27:49 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:03.386 18:27:49 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:03.386 18:27:49 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:03.386 18:27:49 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:03.386 18:27:49 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:03.386 18:27:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.386 18:27:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.386 18:27:49 -- common/autotest_common.sh@10 -- # set +x 00:07:03.386 ************************************ 00:07:03.386 START TEST nvmf_tcp 00:07:03.386 ************************************ 00:07:03.386 18:27:49 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:03.386 * Looking for test storage... 
00:07:03.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:03.386 18:27:49 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.386 18:27:49 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.386 18:27:49 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.386 18:27:49 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.386 18:27:49 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.386 18:27:49 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.386 18:27:49 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.387 18:27:49 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:03.387 18:27:49 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.387 18:27:49 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.387 --rc genhtml_branch_coverage=1 00:07:03.387 --rc genhtml_function_coverage=1 00:07:03.387 --rc genhtml_legend=1 00:07:03.387 --rc geninfo_all_blocks=1 00:07:03.387 --rc geninfo_unexecuted_blocks=1 00:07:03.387 00:07:03.387 ' 00:07:03.387 18:27:49 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.387 --rc genhtml_branch_coverage=1 00:07:03.387 --rc genhtml_function_coverage=1 00:07:03.387 --rc genhtml_legend=1 00:07:03.387 --rc geninfo_all_blocks=1 00:07:03.387 --rc geninfo_unexecuted_blocks=1 00:07:03.387 00:07:03.387 ' 00:07:03.387 18:27:49 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:03.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.387 --rc genhtml_branch_coverage=1 00:07:03.387 --rc genhtml_function_coverage=1 00:07:03.387 --rc genhtml_legend=1 00:07:03.387 --rc geninfo_all_blocks=1 00:07:03.387 --rc geninfo_unexecuted_blocks=1 00:07:03.387 00:07:03.387 ' 00:07:03.387 18:27:49 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.387 --rc genhtml_branch_coverage=1 00:07:03.387 --rc genhtml_function_coverage=1 00:07:03.387 --rc genhtml_legend=1 00:07:03.387 --rc geninfo_all_blocks=1 00:07:03.387 --rc geninfo_unexecuted_blocks=1 00:07:03.387 00:07:03.387 ' 00:07:03.387 18:27:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:03.387 18:27:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:03.387 18:27:49 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:03.387 18:27:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.387 18:27:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.387 18:27:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:03.387 ************************************ 00:07:03.387 START TEST nvmf_target_core 00:07:03.387 ************************************ 00:07:03.387 18:27:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:03.646 * Looking for test storage... 
00:07:03.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:03.646 18:27:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.646 18:27:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.646 18:27:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:03.646 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.647 --rc genhtml_branch_coverage=1 00:07:03.647 --rc genhtml_function_coverage=1 00:07:03.647 --rc genhtml_legend=1 00:07:03.647 --rc geninfo_all_blocks=1 00:07:03.647 --rc geninfo_unexecuted_blocks=1 00:07:03.647 00:07:03.647 ' 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.647 --rc genhtml_branch_coverage=1 
00:07:03.647 --rc genhtml_function_coverage=1 00:07:03.647 --rc genhtml_legend=1 00:07:03.647 --rc geninfo_all_blocks=1 00:07:03.647 --rc geninfo_unexecuted_blocks=1 00:07:03.647 00:07:03.647 ' 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:03.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.647 --rc genhtml_branch_coverage=1 00:07:03.647 --rc genhtml_function_coverage=1 00:07:03.647 --rc genhtml_legend=1 00:07:03.647 --rc geninfo_all_blocks=1 00:07:03.647 --rc geninfo_unexecuted_blocks=1 00:07:03.647 00:07:03.647 ' 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.647 --rc genhtml_branch_coverage=1 00:07:03.647 --rc genhtml_function_coverage=1 00:07:03.647 --rc genhtml_legend=1 00:07:03.647 --rc geninfo_all_blocks=1 00:07:03.647 --rc geninfo_unexecuted_blocks=1 00:07:03.647 00:07:03.647 ' 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:03.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
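The `[: : integer expression expected` message in the trace above comes from nvmf/common.sh line 33 running `'[' '' -eq 1 ']'`: the variable being tested is empty, and `[`'s `-eq` operator requires both operands to be integers. The test fails with status 2 rather than a clean false, which is harmless here but noisy. A minimal sketch of the failure and a common guard (the variable name `val` is illustrative, not from the script):

```shell
#!/usr/bin/env bash
# Reproduces the failure logged above: an empty string is not a valid
# operand for the numeric -eq test, so [ exits 2 and prints an error.
val=""
if [ "$val" -eq 1 ] 2>/dev/null; then
  echo "enabled"
fi

# A common guard: default the empty value to 0 before the comparison,
# so the test is always given a well-formed integer.
if [ "${val:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

With the `${val:-0}` default in place, an unset or empty flag simply compares as 0 and the error message disappears from the log.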
00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:03.647 ************************************ 00:07:03.647 START TEST nvmf_abort 00:07:03.647 ************************************ 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:03.647 * Looking for test storage... 
00:07:03.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.647 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.907 
18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.907 --rc genhtml_branch_coverage=1 00:07:03.907 --rc genhtml_function_coverage=1 00:07:03.907 --rc genhtml_legend=1 00:07:03.907 --rc geninfo_all_blocks=1 00:07:03.907 --rc 
geninfo_unexecuted_blocks=1 00:07:03.907 00:07:03.907 ' 00:07:03.907 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.908 --rc genhtml_branch_coverage=1 00:07:03.908 --rc genhtml_function_coverage=1 00:07:03.908 --rc genhtml_legend=1 00:07:03.908 --rc geninfo_all_blocks=1 00:07:03.908 --rc geninfo_unexecuted_blocks=1 00:07:03.908 00:07:03.908 ' 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:03.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.908 --rc genhtml_branch_coverage=1 00:07:03.908 --rc genhtml_function_coverage=1 00:07:03.908 --rc genhtml_legend=1 00:07:03.908 --rc geninfo_all_blocks=1 00:07:03.908 --rc geninfo_unexecuted_blocks=1 00:07:03.908 00:07:03.908 ' 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.908 --rc genhtml_branch_coverage=1 00:07:03.908 --rc genhtml_function_coverage=1 00:07:03.908 --rc genhtml_legend=1 00:07:03.908 --rc geninfo_all_blocks=1 00:07:03.908 --rc geninfo_unexecuted_blocks=1 00:07:03.908 00:07:03.908 ' 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.908 18:27:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:03.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:03.908 18:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:06.442 18:27:52 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:06.442 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:06.442 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:06.442 18:27:52 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:06.442 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:07:06.442 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.442 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:06.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:06.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:07:06.443 00:07:06.443 --- 10.0.0.2 ping statistics --- 00:07:06.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.443 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:07:06.443 00:07:06.443 --- 10.0.0.1 ping statistics --- 00:07:06.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.443 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=604782 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 604782 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 604782 ']' 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.443 18:27:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.443 [2024-11-17 18:27:52.793261] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:07:06.443 [2024-11-17 18:27:52.793357] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.443 [2024-11-17 18:27:52.868327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.443 [2024-11-17 18:27:52.918342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.443 [2024-11-17 18:27:52.918397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.443 [2024-11-17 18:27:52.918424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.443 [2024-11-17 18:27:52.918436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.443 [2024-11-17 18:27:52.918445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:06.443 [2024-11-17 18:27:52.920230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.443 [2024-11-17 18:27:52.920294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.443 [2024-11-17 18:27:52.920297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.702 [2024-11-17 18:27:53.068785] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.702 Malloc0 00:07:06.702 18:27:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.702 Delay0 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.702 [2024-11-17 18:27:53.135758] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.702 18:27:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:06.702 [2024-11-17 18:27:53.250577] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:09.234 Initializing NVMe Controllers 00:07:09.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:09.234 controller IO queue size 128 less than required 00:07:09.234 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:09.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:09.234 Initialization complete. Launching workers. 
00:07:09.234 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29023 00:07:09.234 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29084, failed to submit 62 00:07:09.234 success 29027, unsuccessful 57, failed 0 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:09.234 rmmod nvme_tcp 00:07:09.234 rmmod nvme_fabrics 00:07:09.234 rmmod nvme_keyring 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:09.234 18:27:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 604782 ']' 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 604782 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 604782 ']' 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 604782 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 604782 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 604782' 00:07:09.234 killing process with pid 604782 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 604782 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 604782 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.234 18:27:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:11.771 00:07:11.771 real 0m7.659s 00:07:11.771 user 0m11.064s 00:07:11.771 sys 0m2.683s 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.771 ************************************ 00:07:11.771 END TEST nvmf_abort 00:07:11.771 ************************************ 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:11.771 ************************************ 00:07:11.771 START TEST nvmf_ns_hotplug_stress 00:07:11.771 ************************************ 00:07:11.771 18:27:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:11.771 * Looking for test storage... 00:07:11.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.771 
18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.771 18:27:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.771 --rc genhtml_branch_coverage=1 00:07:11.771 --rc genhtml_function_coverage=1 00:07:11.771 --rc genhtml_legend=1 00:07:11.771 --rc geninfo_all_blocks=1 00:07:11.771 --rc geninfo_unexecuted_blocks=1 00:07:11.771 00:07:11.771 ' 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.771 --rc genhtml_branch_coverage=1 00:07:11.771 --rc genhtml_function_coverage=1 00:07:11.771 --rc genhtml_legend=1 00:07:11.771 --rc geninfo_all_blocks=1 00:07:11.771 --rc geninfo_unexecuted_blocks=1 00:07:11.771 00:07:11.771 ' 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.771 --rc genhtml_branch_coverage=1 00:07:11.771 --rc genhtml_function_coverage=1 00:07:11.771 --rc genhtml_legend=1 00:07:11.771 --rc geninfo_all_blocks=1 00:07:11.771 --rc geninfo_unexecuted_blocks=1 00:07:11.771 00:07:11.771 ' 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.771 --rc genhtml_branch_coverage=1 00:07:11.771 --rc genhtml_function_coverage=1 00:07:11.771 --rc genhtml_legend=1 00:07:11.771 --rc geninfo_all_blocks=1 00:07:11.771 --rc geninfo_unexecuted_blocks=1 00:07:11.771 
00:07:11.771 ' 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.771 18:27:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.771 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:11.771 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:11.771 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:11.771 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.771 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:11.772 18:27:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:13.678 18:28:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:13.678 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:13.678 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:13.678 18:28:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.678 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:13.679 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.679 18:28:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:13.679 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:13.679 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.937 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.937 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.937 18:28:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:13.937 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:13.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:07:13.937 00:07:13.937 --- 10.0.0.2 ping statistics --- 00:07:13.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.937 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:07:13.937 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:13.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:07:13.938 00:07:13.938 --- 10.0.0.1 ping statistics --- 00:07:13.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.938 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=607029 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 607029 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 607029 ']' 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.938 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.938 [2024-11-17 18:28:00.368156] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:07:13.938 [2024-11-17 18:28:00.368225] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.938 [2024-11-17 18:28:00.441906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.938 [2024-11-17 18:28:00.492008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.938 [2024-11-17 18:28:00.492064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.938 [2024-11-17 18:28:00.492078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.938 [2024-11-17 18:28:00.492089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.938 [2024-11-17 18:28:00.492099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:13.938 [2024-11-17 18:28:00.493717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.938 [2024-11-17 18:28:00.493752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.938 [2024-11-17 18:28:00.493756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.196 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.196 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:14.196 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:14.196 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.196 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:14.196 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.196 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:14.196 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:14.454 [2024-11-17 18:28:00.895746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.454 18:28:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:14.712 18:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.970 [2024-11-17 18:28:01.426549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.970 18:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:15.227 18:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:15.484 Malloc0 00:07:15.485 18:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:15.742 Delay0 00:07:15.742 18:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.000 18:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:16.257 NULL1 00:07:16.257 18:28:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:16.823 18:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=607448 00:07:16.824 18:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:16.824 18:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:16.824 18:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.824 18:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.081 18:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:17.081 18:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:17.339 true 00:07:17.339 18:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:17.339 18:28:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.905 18:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.905 18:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:17.905 18:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:18.163 true 00:07:18.163 18:28:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:18.163 18:28:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.729 18:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.729 18:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:18.729 18:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:18.987 true 00:07:18.987 18:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:18.987 18:28:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.359 Read completed with error (sct=0, sc=11) 00:07:20.359 18:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.359 18:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:20.359 18:28:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:20.617 true 00:07:20.617 18:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 607448 00:07:20.617 18:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.875 18:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.133 18:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:21.133 18:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:21.390 true 00:07:21.390 18:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:21.390 18:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.647 18:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.905 18:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:21.905 18:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:22.163 true 00:07:22.163 18:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:22.163 18:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.094 18:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.352 18:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:23.352 18:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:23.609 true 00:07:23.867 18:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:23.867 18:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.124 18:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.382 18:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:24.382 18:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:24.640 true 00:07:24.640 18:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:24.640 18:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.572 18:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.573 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:25.573 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:25.830 true 00:07:25.830 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:25.830 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.088 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.345 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:26.345 18:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:26.603 true 00:07:26.861 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:26.861 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.119 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.376 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:27.376 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:27.376 true 00:07:27.634 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:27.634 18:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.568 18:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.826 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:28.826 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:29.084 true 00:07:29.084 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:29.084 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:29.342 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.600 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:29.600 18:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:29.858 true 00:07:29.858 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:29.858 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.116 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.374 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:30.374 18:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:30.632 true 00:07:30.632 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:30.632 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.565 18:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.822 18:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:31.822 18:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:32.080 true 00:07:32.080 18:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:32.080 18:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.338 18:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.595 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:32.595 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:32.852 true 00:07:32.852 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:32.852 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.125 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.450 
18:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:33.450 18:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:33.776 true 00:07:33.776 18:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:33.776 18:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.737 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.994 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:34.994 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:35.252 true 00:07:35.252 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:35.252 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.510 18:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.767 18:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:35.767 18:28:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:36.024 true 00:07:36.024 18:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:36.024 18:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.282 18:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.540 18:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:36.540 18:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:36.797 true 00:07:36.797 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:36.797 18:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.730 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.988 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:37.988 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:38.245 true 00:07:38.245 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:38.245 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.502 18:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.760 18:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:38.760 18:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:39.017 true 00:07:39.017 18:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:39.017 18:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.274 18:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.531 18:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:39.532 18:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:39.789 true 00:07:39.789 18:28:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:39.789 18:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.721 18:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.978 18:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:40.978 18:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:41.235 true 00:07:41.235 18:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:41.235 18:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.492 18:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.749 18:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:41.749 18:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:42.007 true 00:07:42.007 18:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:42.007 18:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.939 18:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.939 18:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:42.939 18:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:43.503 true 00:07:43.503 18:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:43.503 18:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.503 18:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.067 18:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:44.067 18:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:44.067 true 00:07:44.067 18:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:44.067 18:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.325 18:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.583 18:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:44.583 18:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:44.841 true 00:07:44.841 18:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:44.841 18:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.212 18:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.212 18:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:46.212 18:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:46.470 true 00:07:46.470 18:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448 00:07:46.470 18:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1
00:07:46.728 18:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:46.985 18:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:07:46.985 18:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:07:46.985 Initializing NVMe Controllers
00:07:46.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:46.986 Controller IO queue size 128, less than required.
00:07:46.986 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:46.986 Controller IO queue size 128, less than required.
00:07:46.986 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:46.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:46.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:46.986 Initialization complete. Launching workers.
00:07:46.986 ========================================================
00:07:46.986 Latency(us)
00:07:46.986 Device Information : IOPS MiB/s Average min max
00:07:46.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 208.07 0.10 223856.97 2855.91 1054962.61
00:07:46.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7674.97 3.75 16629.51 3357.73 538338.72
00:07:46.986 ========================================================
00:07:46.986 Total : 7883.05 3.85 22099.24 2855.91 1054962.61
00:07:46.986
00:07:47.243 true
00:07:47.501 18:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 607448
00:07:47.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (607448) - No such process
00:07:47.501 18:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 607448
00:07:47.501 18:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:47.759 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:48.016 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:48.017 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:48.017 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:48.017 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:48.017 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:48.274 null0 00:07:48.274 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:48.274 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:48.274 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:48.532 null1 00:07:48.532 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:48.532 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:48.532 18:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:48.790 null2 00:07:48.790 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:48.790 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:48.790 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:49.048 null3 00:07:49.048 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.048 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.048 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:49.306 null4 00:07:49.306 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.306 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.306 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:49.564 null5 00:07:49.564 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.564 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.564 18:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:49.822 null6 00:07:49.822 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.822 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.822 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:50.080 null7 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.080 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
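The add/remove/resize cycle that dominates the trace above (ns_hotplug_stress.sh@44 through @50) removes namespace 1, re-adds the Delay0 bdev, and grows NULL1 by one unit per pass while the I/O workload runs. A minimal runnable sketch of that loop, with `scripts/rpc.py` replaced by an echo stub and the workload pid and iteration count as stand-ins (the real script loops until the bdevperf process, pid 607448 in this log, exits):

```shell
#!/usr/bin/env bash
# rpc is a stub for scripts/rpc.py so this sketch runs without an SPDK target.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
PERF_PID=$$          # stand-in for the I/O workload pid (607448 in this log)
null_size=1002

for _ in 1 2 3; do                           # stand-in bound; the script loops until the workload dies
  kill -0 "$PERF_PID" 2>/dev/null || break   # @44: continue only while the workload is alive
  rpc nvmf_subsystem_remove_ns "$NQN" 1      # @45: hot-remove namespace 1
  rpc nvmf_subsystem_add_ns "$NQN" Delay0    # @46: re-attach the Delay0 bdev as a namespace
  null_size=$((null_size + 1))               # @49: next size (1003, 1004, ... in the log)
  rpc bdev_null_resize NULL1 "$null_size"    # @50: resize NULL1 while I/O is in flight
done
```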
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 611523 611524 611526 611528 611530 611532 611534 611536
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.081 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:50.339 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:50.339 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:50.339 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:50.339 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:50.339 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:50.339 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:50.339 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:50.339 18:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.598 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:50.856 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:50.856 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:50.856 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:50.856 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:51.114 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:51.114 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:51.114 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:51.114 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.373 18:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:51.632 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:51.632 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:51.632 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:51.632 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:51.632 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:51.632 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:51.632 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:51.632 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:51.890 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.891 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.891 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:52.149 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:52.149 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:52.149 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:52.149 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:52.149 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:52.149 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.149 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:52.149 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:52.407 18:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:52.665 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:52.665 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:52.665 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:52.665 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:52.665 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:52.665 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:52.665 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:52.923 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:53.181 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:53.439 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:53.439 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:53.439 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:53.439 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:53.439 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:53.439 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:53.439 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:53.439 18:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8
nqn.2016-06.io.spdk:cnode1 null7 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.698 18:28:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.698 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.956 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.956 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.956 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.956 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.956 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.956 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.956 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.956 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.214 
18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.214 18:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.472 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.472 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.472 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.473 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.473 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.473 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.473 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.731 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.989 18:28:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.989 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.248 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.248 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.248 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.248 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.248 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.248 18:28:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.248 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.248 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.506 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.507 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.507 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.507 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.507 18:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.765 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.765 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.765 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.765 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.765 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.765 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.765 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.765 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.024 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.024 rmmod nvme_tcp 00:07:56.024 rmmod nvme_fabrics 00:07:56.024 rmmod nvme_keyring 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 607029 ']' 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 607029 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' 
-z 607029 ']' 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 607029 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 607029 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 607029' 00:07:56.283 killing process with pid 607029 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 607029 00:07:56.283 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 607029 00:07:56.542 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.542 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:56.542 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:56.543 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:56.543 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:56.543 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:56.543 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- 
# iptables-restore 00:07:56.543 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:56.543 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:56.543 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.543 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.543 18:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.446 18:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:58.446 00:07:58.446 real 0m47.068s 00:07:58.446 user 3m39.988s 00:07:58.446 sys 0m15.703s 00:07:58.446 18:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.446 18:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:58.446 ************************************ 00:07:58.446 END TEST nvmf_ns_hotplug_stress 00:07:58.446 ************************************ 00:07:58.446 18:28:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:58.446 18:28:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.446 18:28:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.446 18:28:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:58.446 ************************************ 00:07:58.446 START TEST nvmf_delete_subsystem 00:07:58.446 ************************************ 00:07:58.446 18:28:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:58.446 * Looking for test storage... 00:07:58.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.705 18:28:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:58.705 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.706 18:28:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:58.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.706 --rc genhtml_branch_coverage=1 00:07:58.706 --rc genhtml_function_coverage=1 00:07:58.706 --rc genhtml_legend=1 00:07:58.706 --rc geninfo_all_blocks=1 00:07:58.706 --rc geninfo_unexecuted_blocks=1 00:07:58.706 00:07:58.706 ' 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:58.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.706 --rc genhtml_branch_coverage=1 00:07:58.706 --rc genhtml_function_coverage=1 00:07:58.706 --rc genhtml_legend=1 00:07:58.706 --rc geninfo_all_blocks=1 00:07:58.706 --rc geninfo_unexecuted_blocks=1 00:07:58.706 00:07:58.706 ' 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:58.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.706 --rc genhtml_branch_coverage=1 00:07:58.706 --rc genhtml_function_coverage=1 00:07:58.706 --rc genhtml_legend=1 00:07:58.706 --rc geninfo_all_blocks=1 00:07:58.706 --rc geninfo_unexecuted_blocks=1 00:07:58.706 00:07:58.706 ' 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:58.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.706 --rc genhtml_branch_coverage=1 00:07:58.706 --rc genhtml_function_coverage=1 00:07:58.706 --rc genhtml_legend=1 00:07:58.706 --rc geninfo_all_blocks=1 00:07:58.706 --rc geninfo_unexecuted_blocks=1 00:07:58.706 00:07:58.706 ' 
00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.706 18:28:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:58.706 18:28:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:01.243 18:28:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.243 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:01.244 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:01.244 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:01.244 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:08:01.244 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:01.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:01.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:08:01.244 00:08:01.244 --- 10.0.0.2 ping statistics --- 00:08:01.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.244 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:08:01.244 00:08:01.244 --- 10.0.0.1 ping statistics --- 00:08:01.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.244 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:01.244 18:28:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=614302 00:08:01.244 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 614302 00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 614302 ']' 00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.245 [2024-11-17 18:28:47.468338] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:08:01.245 [2024-11-17 18:28:47.468420] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.245 [2024-11-17 18:28:47.544215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:01.245 [2024-11-17 18:28:47.593102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.245 [2024-11-17 18:28:47.593148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.245 [2024-11-17 18:28:47.593170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.245 [2024-11-17 18:28:47.593188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.245 [2024-11-17 18:28:47.593203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:01.245 [2024-11-17 18:28:47.594622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:01.245 [2024-11-17 18:28:47.594636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:01.245 [2024-11-17 18:28:47.736983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:01.245 [2024-11-17 18:28:47.753205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:01.245 NULL1
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:01.245 Delay0
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=614448
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:08:01.245 18:28:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:01.503 [2024-11-17 18:28:47.838217] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:08:03.399 18:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:03.399 18:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.399 18:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error 
(sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 [2024-11-17 18:28:49.960537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0704000c40 is same with the state(6) to be set 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 
00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write 
completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 starting I/O failed: -6 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.399 Write completed with error (sct=0, sc=8) 00:08:03.399 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 starting I/O failed: -6 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 starting I/O failed: -6 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error 
(sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 starting I/O failed: -6 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 starting I/O failed: -6 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 starting I/O failed: -6 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 [2024-11-17 18:28:49.961296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c510 is same with the state(6) to be set 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 
00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Write completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:03.400 Read 
completed with error (sct=0, sc=8) 00:08:03.400 Read completed with error (sct=0, sc=8) 00:08:04.773 [2024-11-17 18:28:50.933608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243a190 is same with the state(6) to be set 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 [2024-11-17 18:28:50.962798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f070400d7e0 is same with the state(6) to be set 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read 
completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 [2024-11-17 18:28:50.962990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243c330 is same with the state(6) to be set 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error 
(sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 [2024-11-17 18:28:50.963210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243bf70 is same with the state(6) to be set 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Write completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.773 Read completed with error (sct=0, sc=8) 00:08:04.774 Write completed with error (sct=0, sc=8) 00:08:04.774 Read completed with error (sct=0, sc=8) 00:08:04.774 Read completed with error (sct=0, sc=8) 00:08:04.774 Read completed with error (sct=0, sc=8) 00:08:04.774 Read completed with error (sct=0, sc=8) 00:08:04.774 Read completed with error (sct=0, sc=8) 00:08:04.774 Read completed with error (sct=0, sc=8) 00:08:04.774 Read completed with error (sct=0, sc=8) 00:08:04.774 Read completed with error (sct=0, sc=8) 00:08:04.774 Write completed with error (sct=0, sc=8) 00:08:04.774 Read completed with error (sct=0, sc=8) 00:08:04.774 Read completed with error (sct=0, sc=8) 00:08:04.774 Write completed with error (sct=0, sc=8) 00:08:04.774 Read completed with error (sct=0, sc=8) 00:08:04.774 Write completed with error (sct=0, sc=8) 00:08:04.774 [2024-11-17 18:28:50.963397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f070400d020 is same with the state(6) to be set 00:08:04.774 Initializing NVMe Controllers 00:08:04.774 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:04.774 Controller IO queue size 128, less than required. 
00:08:04.774 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:04.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:04.774 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:04.774 Initialization complete. Launching workers.
00:08:04.774 ========================================================
00:08:04.774 Latency(us)
00:08:04.774 Device Information : IOPS MiB/s Average min max
00:08:04.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.23 0.08 904513.77 408.61 1013191.12
00:08:04.774 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.72 0.08 901684.32 671.39 1012879.86
00:08:04.774 ========================================================
00:08:04.774 Total : 332.95 0.16 903096.94 408.61 1013191.12
00:08:04.774
00:08:04.774 [2024-11-17 18:28:50.964502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243a190 (9): Bad file descriptor
00:08:04.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:04.774 18:28:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.774 18:28:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:04.774 18:28:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 614448
00:08:04.774 18:28:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 614448
00:08:05.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (614448) - No such process
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 614448
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 614448
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 614448
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:05.032 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.033 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:05.033 [2024-11-17 18:28:51.489163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:05.033 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.033 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:05.033 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.033 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:05.033 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.033 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=614850
00:08:05.033 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:08:05.033 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 614850
00:08:05.033 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:05.033 18:28:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:05.033 [2024-11-17 18:28:51.560889] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:08:05.598 18:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:05.598 18:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 614850
00:08:05.598 18:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:06.163 18:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:06.163 18:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 614850
00:08:06.163 18:28:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:06.728 18:28:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:06.728 18:28:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 614850
00:08:06.728 18:28:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:06.986 18:28:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:06.986 18:28:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 614850
00:08:06.986 18:28:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:07.611 18:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:07.611 18:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 614850 00:08:07.611 18:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:08.281 18:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:08.281 18:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 614850 00:08:08.281 18:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:08.281 Initializing NVMe Controllers 00:08:08.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:08.281 Controller IO queue size 128, less than required. 00:08:08.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:08.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:08.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:08.281 Initialization complete. Launching workers. 
00:08:08.281 ======================================================== 00:08:08.281 Latency(us) 00:08:08.281 Device Information : IOPS MiB/s Average min max 00:08:08.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004022.95 1000160.78 1042532.42 00:08:08.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004258.58 1000170.03 1011121.21 00:08:08.281 ======================================================== 00:08:08.281 Total : 256.00 0.12 1004140.77 1000160.78 1042532.42 00:08:08.281 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 614850 00:08:08.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (614850) - No such process 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 614850 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:08:08.540 rmmod nvme_tcp 00:08:08.540 rmmod nvme_fabrics 00:08:08.540 rmmod nvme_keyring 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 614302 ']' 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 614302 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 614302 ']' 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 614302 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.540 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 614302 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 614302' 00:08:08.800 killing process with pid 614302 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 614302 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 614302 
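The cleanup traced above has two parts: nvmfcleanup disables `set -e` and retries `modprobe -v -r` on nvme-tcp/nvme-fabrics (up to 20 attempts, since the modules can stay busy while connections drain), and killprocess probes the target pid before signalling it. A hedged reconstruction of the second part, which is the reusable pattern here (the helper name is an assumption; the real SPDK killprocess also inspects the process name via `ps --no-headers -o comm=` before choosing a signal):

```shell
# Kill a child process only if it is still running, then reap it so a
# later `kill -0` (as in the trace above) reports "No such process".
kill_if_running() {
    local pid=$1
    # Probe for existence without sending a signal.
    kill -0 "$pid" 2>/dev/null || return 0   # already exited
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap; works for children only
    return 0
}
```

The `set +e` / retry / `set -e` bracketing around the modprobe calls is what lets the first failed removal attempts appear in the log without aborting the script.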
00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.800 18:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:11.345 00:08:11.345 real 0m12.385s 00:08:11.345 user 0m27.724s 00:08:11.345 sys 0m3.076s 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.345 ************************************ 00:08:11.345 END TEST 
nvmf_delete_subsystem 00:08:11.345 ************************************ 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:11.345 ************************************ 00:08:11.345 START TEST nvmf_host_management 00:08:11.345 ************************************ 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:11.345 * Looking for test storage... 00:08:11.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.345 18:28:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.345 --rc genhtml_branch_coverage=1 00:08:11.345 --rc genhtml_function_coverage=1 00:08:11.345 --rc genhtml_legend=1 00:08:11.345 --rc 
geninfo_all_blocks=1 00:08:11.345 --rc geninfo_unexecuted_blocks=1 00:08:11.345 00:08:11.345 ' 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.345 --rc genhtml_branch_coverage=1 00:08:11.345 --rc genhtml_function_coverage=1 00:08:11.345 --rc genhtml_legend=1 00:08:11.345 --rc geninfo_all_blocks=1 00:08:11.345 --rc geninfo_unexecuted_blocks=1 00:08:11.345 00:08:11.345 ' 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.345 --rc genhtml_branch_coverage=1 00:08:11.345 --rc genhtml_function_coverage=1 00:08:11.345 --rc genhtml_legend=1 00:08:11.345 --rc geninfo_all_blocks=1 00:08:11.345 --rc geninfo_unexecuted_blocks=1 00:08:11.345 00:08:11.345 ' 00:08:11.345 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.345 --rc genhtml_branch_coverage=1 00:08:11.345 --rc genhtml_function_coverage=1 00:08:11.345 --rc genhtml_legend=1 00:08:11.345 --rc geninfo_all_blocks=1 00:08:11.345 --rc geninfo_unexecuted_blocks=1 00:08:11.346 00:08:11.346 ' 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.346 
18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:11.346 18:28:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:13.269 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:13.269 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:13.269 18:28:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:13.269 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:13.269 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.269 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:13.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:08:13.270 00:08:13.270 --- 10.0.0.2 ping statistics --- 00:08:13.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.270 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:13.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:08:13.270 00:08:13.270 --- 10.0.0.1 ping statistics --- 00:08:13.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.270 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:13.270 18:28:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=617221 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 617221 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 617221 ']' 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.270 18:28:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.530 [2024-11-17 18:28:59.878894] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
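The `waitforlisten 617221` step above blocks until the freshly launched `nvmf_tgt` exposes its RPC socket at /var/tmp/spdk.sock. A minimal sketch of that polling pattern follows; the helper name `wait_for_path` and the retry counts are illustrative, not the actual autotest_common.sh helper:

```shell
#!/usr/bin/env bash
# Hedged sketch of the waitforlisten pattern: poll until a path (here the
# nvmf_tgt RPC socket) appears, giving up after a bounded number of retries.
# wait_for_path is an illustrative name, not the real autotest helper.
wait_for_path() {
    local path=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -e "$path" ]] && return 0   # the real helper tests -S for a socket
        sleep 0.1
    done
    return 1   # timed out; callers treat this as a fatal startup failure
}
```

The actual helper also rechecks between polls that the target PID is still alive, so a crashed `nvmf_tgt` fails fast instead of burning the whole timeout.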
00:08:13.530 [2024-11-17 18:28:59.878971] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.530 [2024-11-17 18:28:59.952416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.530 [2024-11-17 18:29:00.003406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.530 [2024-11-17 18:29:00.003464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.530 [2024-11-17 18:29:00.003480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.530 [2024-11-17 18:29:00.003493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.530 [2024-11-17 18:29:00.003504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
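The four reactor threads that start next are a direct consequence of the `-m 0x1E` core mask passed to `nvmf_tgt`: each set bit selects one CPU core, and 0x1E has bits 1 through 4 set. A quick way to expand such a mask (illustrative snippet, not an SPDK utility):

```shell
#!/usr/bin/env bash
# Expand an SPDK-style core mask: bit n set means "run a reactor on core n".
# 0x1E = 0b11110, so cores 1, 2, 3 and 4 are selected, matching the
# "Reactor started on core ..." notices in the log.
mask=0x1E
cores=()
for (( n = 0; n < 8; n++ )); do
    (( (mask >> n) & 1 )) && cores+=("$n")
done
echo "cores: ${cores[*]}"   # cores: 1 2 3 4
```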
00:08:13.530 [2024-11-17 18:29:00.005053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.530 [2024-11-17 18:29:00.005105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.530 [2024-11-17 18:29:00.005156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:13.530 [2024-11-17 18:29:00.005158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.789 [2024-11-17 18:29:00.152611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:13.789 18:29:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.789 Malloc0 00:08:13.789 [2024-11-17 18:29:00.225004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=617382 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 617382 /var/tmp/bdevperf.sock 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 617382 ']' 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.789 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:13.790 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:13.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:13.790 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:13.790 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.790 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:13.790 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.790 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:13.790 { 00:08:13.790 "params": { 00:08:13.790 "name": "Nvme$subsystem", 00:08:13.790 "trtype": "$TEST_TRANSPORT", 00:08:13.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.790 "adrfam": "ipv4", 00:08:13.790 "trsvcid": "$NVMF_PORT", 00:08:13.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.790 "hdgst": ${hdgst:-false}, 
00:08:13.790 "ddgst": ${ddgst:-false} 00:08:13.790 }, 00:08:13.790 "method": "bdev_nvme_attach_controller" 00:08:13.790 } 00:08:13.790 EOF 00:08:13.790 )") 00:08:13.790 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:13.790 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:13.790 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:13.790 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:13.790 "params": { 00:08:13.790 "name": "Nvme0", 00:08:13.790 "trtype": "tcp", 00:08:13.790 "traddr": "10.0.0.2", 00:08:13.790 "adrfam": "ipv4", 00:08:13.790 "trsvcid": "4420", 00:08:13.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:13.790 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:13.790 "hdgst": false, 00:08:13.790 "ddgst": false 00:08:13.790 }, 00:08:13.790 "method": "bdev_nvme_attach_controller" 00:08:13.790 }' 00:08:13.790 [2024-11-17 18:29:00.307242] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:08:13.790 [2024-11-17 18:29:00.307319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617382 ] 00:08:14.049 [2024-11-17 18:29:00.380438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.049 [2024-11-17 18:29:00.427641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.308 Running I/O for 10 seconds... 
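The `gen_nvmf_target_json` call above expands the heredoc template once per subsystem, and `jq` then joins the fragments into the single JSON document fed to bdevperf via `--json /dev/fd/63`. A standalone sketch of that expansion for the one-subsystem case shown in the log; the variable values mirror what the log reports, and this is not the nvmf/common.sh helper verbatim:

```shell
#!/usr/bin/env bash
# Sketch of how the rendered bdevperf config above comes together: substitute
# the target address, port, and subsystem index into the per-subsystem
# template. Values are the ones this log reports (10.0.0.2:4420, subsystem 0).
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=0
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

Passing the result over a process-substitution file descriptor keeps the config out of the filesystem and ties its lifetime to the bdevperf invocation.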
00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:14.308 18:29:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.569 [2024-11-17 18:29:01.047820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22665b0 is same with the state(6) to be set 00:08:14.569 [2024-11-17 18:29:01.047884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22665b0 is same with the state(6) to be set 00:08:14.569 [2024-11-17 18:29:01.047900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22665b0 is same with the state(6) to be set 00:08:14.569 [2024-11-17 18:29:01.047913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22665b0 is same with the state(6) to be set 00:08:14.569 [2024-11-17 18:29:01.047926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22665b0 is same with the state(6) to be set 00:08:14.569 [2024-11-17 18:29:01.047938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22665b0 is same with the state(6) to be set 00:08:14.569 [2024-11-17 18:29:01.047950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22665b0 is same with the state(6) to be set 00:08:14.569 [2024-11-17 
18:29:01.047962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22665b0 is same with the state(6) to be set 00:08:14.569 [same *ERROR* line repeated 11 more times, 18:29:01.047974 through 18:29:01.048109] 00:08:14.569 [2024-11-17 18:29:01.048131] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22665b0 is same with the state(6) to be set 00:08:14.569 [2024-11-17 18:29:01.048144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22665b0 is same with the state(6) to be set 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.569 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.569 [2024-11-17 18:29:01.055309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.569 [2024-11-17 18:29:01.055353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.569 [2024-11-17 18:29:01.055382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.569 [2024-11-17 18:29:01.055408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.569 [2024-11-17 18:29:01.055432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.569 [2024-11-17 18:29:01.055456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.569 [2024-11-17 18:29:01.055482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.569 [2024-11-17 18:29:01.055504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.569 [2024-11-17 18:29:01.055530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc6970 is same with the state(6) to be set 00:08:14.569 [2024-11-17 18:29:01.055980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.569 [2024-11-17 18:29:01.056008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.569 [2024-11-17 18:29:01.056047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.569 [2024-11-17 18:29:01.056091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.569 [2024-11-17 18:29:01.056119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.569 [2024-11-17 18:29:01.056145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.569 [2024-11-17 18:29:01.056170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.569 [2024-11-17 18:29:01.056195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.569 [2024-11-17 18:29:01.056220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.569 
[2024-11-17 18:29:01.056246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.569 [2024-11-17 18:29:01.056271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.569 [2024-11-17 18:29:01.056303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.569 [2024-11-17 18:29:01.056331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.569 [2024-11-17 18:29:01.056357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.569 [2024-11-17 18:29:01.056383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.569 [2024-11-17 18:29:01.056408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.570 [2024-11-17 18:29:01.056433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.570 [2024-11-17 18:29:01.056459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.570 [2024-11-17 18:29:01.056485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.570 [2024-11-17 18:29:01.056511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.570 [2024-11-17 18:29:01.056538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.570 [2024-11-17 18:29:01.056562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.570 [2024-11-17 18:29:01.056588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.570 [2024-11-17 18:29:01.056612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.570 [2024-11-17 18:29:01.056639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.570 [2024-11-17 18:29:01.056685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.570 [2024-11-17 18:29:01.056716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.570 [2024-11-17 18:29:01.056751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.570 [2024-11-17 18:29:01.056780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.570 [2024-11-17 18:29:01.056806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.570 [2024-11-17 18:29:01.056835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.570 [2024-11-17 18:29:01.056859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.570 [2024-11-17 18:29:01.056888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.570 [2024-11-17 18:29:01.056912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.570 [last two messages repeated 47 more times for cid:17 through cid:63 (lba 84096 through 89984, len:128), timestamps 18:29:01.056941 through 18:29:01.059515] 00:08:14.571 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.571 18:29:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:14.571 [2024-11-17 18:29:01.061015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:14.571 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:14.571 00:08:14.571 Latency(us) 00:08:14.571 [2024-11-17T17:29:01.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.571 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:14.571 Job: Nvme0n1 ended in about 0.41 seconds with error 00:08:14.571 Verification LBA range: start 0x0 length 0x400 00:08:14.571 Nvme0n1 : 0.41 1563.60 97.72 156.36 0.00 36160.16
4781.70 34952.53 00:08:14.571 [2024-11-17T17:29:01.147Z] =================================================================================================================== 00:08:14.571 [2024-11-17T17:29:01.147Z] Total : 1563.60 97.72 156.36 0.00 36160.16 4781.70 34952.53 00:08:14.571 [2024-11-17 18:29:01.063141] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.571 [2024-11-17 18:29:01.063175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc6970 (9): Bad file descriptor 00:08:14.830 [2024-11-17 18:29:01.154798] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:15.765 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 617382 00:08:15.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (617382) - No such process 00:08:15.765 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:15.765 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:15.766 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:15.766 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:15.766 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:15.766 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.766 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:08:15.766 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.766 { 00:08:15.766 "params": { 00:08:15.766 "name": "Nvme$subsystem", 00:08:15.766 "trtype": "$TEST_TRANSPORT", 00:08:15.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.766 "adrfam": "ipv4", 00:08:15.766 "trsvcid": "$NVMF_PORT", 00:08:15.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.766 "hdgst": ${hdgst:-false}, 00:08:15.766 "ddgst": ${ddgst:-false} 00:08:15.766 }, 00:08:15.766 "method": "bdev_nvme_attach_controller" 00:08:15.766 } 00:08:15.766 EOF 00:08:15.766 )") 00:08:15.766 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:15.766 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:15.766 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:15.766 18:29:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.766 "params": { 00:08:15.766 "name": "Nvme0", 00:08:15.766 "trtype": "tcp", 00:08:15.766 "traddr": "10.0.0.2", 00:08:15.766 "adrfam": "ipv4", 00:08:15.766 "trsvcid": "4420", 00:08:15.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:15.766 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:15.766 "hdgst": false, 00:08:15.766 "ddgst": false 00:08:15.766 }, 00:08:15.766 "method": "bdev_nvme_attach_controller" 00:08:15.766 }' 00:08:15.766 [2024-11-17 18:29:02.111912] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:08:15.766 [2024-11-17 18:29:02.112022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617544 ] 00:08:15.766 [2024-11-17 18:29:02.182114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.766 [2024-11-17 18:29:02.229351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.025 Running I/O for 1 seconds... 00:08:16.963 1664.00 IOPS, 104.00 MiB/s 00:08:16.963 Latency(us) 00:08:16.963 [2024-11-17T17:29:03.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.963 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:16.963 Verification LBA range: start 0x0 length 0x400 00:08:16.963 Nvme0n1 : 1.03 1684.10 105.26 0.00 0.00 37382.62 5922.51 33204.91 00:08:16.963 [2024-11-17T17:29:03.539Z] =================================================================================================================== 00:08:16.963 [2024-11-17T17:29:03.539Z] Total : 1684.10 105.26 0.00 0.00 37382.62 5922.51 33204.91 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:17.222 18:29:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:17.222 rmmod nvme_tcp 00:08:17.222 rmmod nvme_fabrics 00:08:17.222 rmmod nvme_keyring 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 617221 ']' 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 617221 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 617221 ']' 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 617221 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 617221 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 617221' 00:08:17.222 killing process with pid 617221 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 617221 00:08:17.222 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 617221 00:08:17.482 [2024-11-17 18:29:03.929287] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:17.482 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:17.482 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:17.482 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:17.482 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:17.482 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:17.482 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:17.482 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:17.482 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:17.482 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:17.482 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.482 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.482 18:29:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.022 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.022 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:20.022 00:08:20.022 real 0m8.599s 00:08:20.022 user 0m18.725s 00:08:20.022 sys 0m2.752s 00:08:20.022 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.022 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.022 ************************************ 00:08:20.022 END TEST nvmf_host_management 00:08:20.022 ************************************ 00:08:20.022 18:29:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:20.022 18:29:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.022 18:29:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.022 18:29:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.022 ************************************ 00:08:20.022 START TEST nvmf_lvol 00:08:20.022 ************************************ 00:08:20.022 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:20.022 * Looking for test storage... 
00:08:20.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.023 18:29:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.023 --rc genhtml_branch_coverage=1 00:08:20.023 --rc genhtml_function_coverage=1 00:08:20.023 --rc genhtml_legend=1 00:08:20.023 --rc geninfo_all_blocks=1 00:08:20.023 --rc geninfo_unexecuted_blocks=1 
00:08:20.023 00:08:20.023 ' 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.023 --rc genhtml_branch_coverage=1 00:08:20.023 --rc genhtml_function_coverage=1 00:08:20.023 --rc genhtml_legend=1 00:08:20.023 --rc geninfo_all_blocks=1 00:08:20.023 --rc geninfo_unexecuted_blocks=1 00:08:20.023 00:08:20.023 ' 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.023 --rc genhtml_branch_coverage=1 00:08:20.023 --rc genhtml_function_coverage=1 00:08:20.023 --rc genhtml_legend=1 00:08:20.023 --rc geninfo_all_blocks=1 00:08:20.023 --rc geninfo_unexecuted_blocks=1 00:08:20.023 00:08:20.023 ' 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.023 --rc genhtml_branch_coverage=1 00:08:20.023 --rc genhtml_function_coverage=1 00:08:20.023 --rc genhtml_legend=1 00:08:20.023 --rc geninfo_all_blocks=1 00:08:20.023 --rc geninfo_unexecuted_blocks=1 00:08:20.023 00:08:20.023 ' 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.023 18:29:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:20.023 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:20.024 18:29:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:21.931 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:21.931 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:21.931 
18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:21.931 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:21.931 18:29:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:21.931 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.931 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:21.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:21.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:08:21.932 00:08:21.932 --- 10.0.0.2 ping statistics --- 00:08:21.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.932 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:08:21.932 00:08:21.932 --- 10.0.0.1 ping statistics --- 00:08:21.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.932 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=619752 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 619752 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 619752 ']' 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.932 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:22.191 [2024-11-17 18:29:08.540348] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:08:22.191 [2024-11-17 18:29:08.540431] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.191 [2024-11-17 18:29:08.609442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:22.191 [2024-11-17 18:29:08.651877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.191 [2024-11-17 18:29:08.651932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.191 [2024-11-17 18:29:08.651953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.191 [2024-11-17 18:29:08.651969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.191 [2024-11-17 18:29:08.651982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:22.191 [2024-11-17 18:29:08.653446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.191 [2024-11-17 18:29:08.653564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.191 [2024-11-17 18:29:08.653555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.191 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.191 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:22.191 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:22.191 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:22.191 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:22.449 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.449 18:29:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:22.707 [2024-11-17 18:29:09.042059] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.707 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:22.965 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:22.965 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:23.223 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:23.223 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:23.481 18:29:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:23.739 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b42bc3d7-fc33-48d0-8011-8633445dba47 00:08:23.739 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b42bc3d7-fc33-48d0-8011-8633445dba47 lvol 20 00:08:23.998 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5ff1be03-10b6-43c9-ab06-7a73988ff1f7 00:08:23.998 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:24.256 18:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5ff1be03-10b6-43c9-ab06-7a73988ff1f7 00:08:24.514 18:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:24.772 [2024-11-17 18:29:11.289210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.772 18:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:25.030 18:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=620179 00:08:25.030 18:29:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:25.030 18:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:26.408 18:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5ff1be03-10b6-43c9-ab06-7a73988ff1f7 MY_SNAPSHOT 00:08:26.408 18:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e8a8fbcf-018b-4306-b6bc-a42c8e820ed6 00:08:26.408 18:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5ff1be03-10b6-43c9-ab06-7a73988ff1f7 30 00:08:26.667 18:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e8a8fbcf-018b-4306-b6bc-a42c8e820ed6 MY_CLONE 00:08:27.235 18:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1b3c83dc-f4cb-4963-8a46-b09143c1d1ce 00:08:27.235 18:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1b3c83dc-f4cb-4963-8a46-b09143c1d1ce 00:08:27.804 18:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 620179 00:08:35.923 Initializing NVMe Controllers 00:08:35.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:35.923 Controller IO queue size 128, less than required. 00:08:35.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:35.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:35.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:35.923 Initialization complete. Launching workers. 00:08:35.923 ======================================================== 00:08:35.923 Latency(us) 00:08:35.923 Device Information : IOPS MiB/s Average min max 00:08:35.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10599.00 41.40 12077.23 2031.72 131545.09 00:08:35.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10569.50 41.29 12117.64 2109.01 59431.55 00:08:35.923 ======================================================== 00:08:35.923 Total : 21168.50 82.69 12097.40 2031.72 131545.09 00:08:35.923 00:08:35.923 18:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:35.923 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5ff1be03-10b6-43c9-ab06-7a73988ff1f7 00:08:36.181 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b42bc3d7-fc33-48d0-8011-8633445dba47 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:36.440 18:29:22 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:36.440 rmmod nvme_tcp 00:08:36.440 rmmod nvme_fabrics 00:08:36.440 rmmod nvme_keyring 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 619752 ']' 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 619752 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 619752 ']' 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 619752 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 619752 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 619752' 00:08:36.440 killing process with pid 619752 00:08:36.440 18:29:22 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 619752 00:08:36.440 18:29:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 619752 00:08:36.700 18:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:36.700 18:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:36.700 18:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:36.700 18:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:36.700 18:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:36.700 18:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:36.700 18:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:36.700 18:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:36.700 18:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:36.700 18:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.700 18:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.700 18:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:39.240 00:08:39.240 real 0m19.137s 00:08:39.240 user 1m5.666s 00:08:39.240 sys 0m5.383s 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:39.240 ************************************ 00:08:39.240 END TEST 
nvmf_lvol 00:08:39.240 ************************************ 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.240 ************************************ 00:08:39.240 START TEST nvmf_lvs_grow 00:08:39.240 ************************************ 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:39.240 * Looking for test storage... 00:08:39.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.240 18:29:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:39.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.240 --rc genhtml_branch_coverage=1 00:08:39.240 --rc genhtml_function_coverage=1 00:08:39.240 --rc genhtml_legend=1 00:08:39.240 --rc geninfo_all_blocks=1 00:08:39.240 --rc geninfo_unexecuted_blocks=1 00:08:39.240 00:08:39.240 ' 
00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:39.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.240 --rc genhtml_branch_coverage=1 00:08:39.240 --rc genhtml_function_coverage=1 00:08:39.240 --rc genhtml_legend=1 00:08:39.240 --rc geninfo_all_blocks=1 00:08:39.240 --rc geninfo_unexecuted_blocks=1 00:08:39.240 00:08:39.240 ' 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:39.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.240 --rc genhtml_branch_coverage=1 00:08:39.240 --rc genhtml_function_coverage=1 00:08:39.240 --rc genhtml_legend=1 00:08:39.240 --rc geninfo_all_blocks=1 00:08:39.240 --rc geninfo_unexecuted_blocks=1 00:08:39.240 00:08:39.240 ' 00:08:39.240 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:39.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.241 --rc genhtml_branch_coverage=1 00:08:39.241 --rc genhtml_function_coverage=1 00:08:39.241 --rc genhtml_legend=1 00:08:39.241 --rc geninfo_all_blocks=1 00:08:39.241 --rc geninfo_unexecuted_blocks=1 00:08:39.241 00:08:39.241 ' 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.241 18:29:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.241 
18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.241 18:29:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.241 
18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.241 18:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:41.150 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:41.150 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:41.150 
18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.150 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:41.151 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:41.151 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.151 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.410 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.410 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:41.411 18:29:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:41.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:08:41.411 00:08:41.411 --- 10.0.0.2 ping statistics --- 00:08:41.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.411 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:41.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:08:41.411 00:08:41.411 --- 10.0.0.1 ping statistics --- 00:08:41.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.411 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=623466 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 623466 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 623466 ']' 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.411 18:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:41.411 [2024-11-17 18:29:27.882902] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:08:41.411 [2024-11-17 18:29:27.882995] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.411 [2024-11-17 18:29:27.954734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.669 [2024-11-17 18:29:27.998088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.669 [2024-11-17 18:29:27.998139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.669 [2024-11-17 18:29:27.998159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.669 [2024-11-17 18:29:27.998175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.669 [2024-11-17 18:29:27.998203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:41.669 [2024-11-17 18:29:27.998883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.669 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.669 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:41.669 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:41.669 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:41.669 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:41.669 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.669 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:41.928 [2024-11-17 18:29:28.388380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.928 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:41.929 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.929 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.929 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:41.929 ************************************ 00:08:41.929 START TEST lvs_grow_clean 00:08:41.929 ************************************ 00:08:41.929 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:41.929 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:41.929 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:41.929 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:41.929 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:41.929 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:41.929 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:41.929 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.929 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.929 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.188 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:42.188 18:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:42.447 18:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=770878d6-e901-49e6-aea2-74933d30c66b 00:08:42.447 18:29:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 770878d6-e901-49e6-aea2-74933d30c66b 00:08:42.447 18:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:43.016 18:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:43.016 18:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:43.016 18:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 770878d6-e901-49e6-aea2-74933d30c66b lvol 150 00:08:43.016 18:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=dd0764d3-479d-48cf-80ad-b8d5c2ea257b 00:08:43.016 18:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.016 18:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:43.275 [2024-11-17 18:29:29.816074] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:43.275 [2024-11-17 18:29:29.816177] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:43.275 true 00:08:43.275 18:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 770878d6-e901-49e6-aea2-74933d30c66b 00:08:43.275 18:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:43.534 18:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:43.534 18:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:44.103 18:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dd0764d3-479d-48cf-80ad-b8d5c2ea257b 00:08:44.103 18:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:44.363 [2024-11-17 18:29:30.915449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.363 18:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.932 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=623911 00:08:44.932 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:44.932 18:29:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:44.932 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 623911 /var/tmp/bdevperf.sock 00:08:44.932 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 623911 ']' 00:08:44.932 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:44.932 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.932 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:44.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:44.932 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.932 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:44.932 [2024-11-17 18:29:31.251213] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:08:44.932 [2024-11-17 18:29:31.251281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623911 ] 00:08:44.932 [2024-11-17 18:29:31.316598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.932 [2024-11-17 18:29:31.361344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.932 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.932 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:44.932 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:45.499 Nvme0n1 00:08:45.499 18:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:45.759 [ 00:08:45.759 { 00:08:45.759 "name": "Nvme0n1", 00:08:45.759 "aliases": [ 00:08:45.759 "dd0764d3-479d-48cf-80ad-b8d5c2ea257b" 00:08:45.759 ], 00:08:45.759 "product_name": "NVMe disk", 00:08:45.759 "block_size": 4096, 00:08:45.759 "num_blocks": 38912, 00:08:45.759 "uuid": "dd0764d3-479d-48cf-80ad-b8d5c2ea257b", 00:08:45.759 "numa_id": 0, 00:08:45.759 "assigned_rate_limits": { 00:08:45.759 "rw_ios_per_sec": 0, 00:08:45.759 "rw_mbytes_per_sec": 0, 00:08:45.759 "r_mbytes_per_sec": 0, 00:08:45.759 "w_mbytes_per_sec": 0 00:08:45.759 }, 00:08:45.759 "claimed": false, 00:08:45.759 "zoned": false, 00:08:45.759 "supported_io_types": { 00:08:45.759 "read": true, 
00:08:45.759 "write": true, 00:08:45.759 "unmap": true, 00:08:45.759 "flush": true, 00:08:45.759 "reset": true, 00:08:45.759 "nvme_admin": true, 00:08:45.759 "nvme_io": true, 00:08:45.759 "nvme_io_md": false, 00:08:45.759 "write_zeroes": true, 00:08:45.759 "zcopy": false, 00:08:45.759 "get_zone_info": false, 00:08:45.759 "zone_management": false, 00:08:45.759 "zone_append": false, 00:08:45.759 "compare": true, 00:08:45.759 "compare_and_write": true, 00:08:45.759 "abort": true, 00:08:45.759 "seek_hole": false, 00:08:45.759 "seek_data": false, 00:08:45.759 "copy": true, 00:08:45.759 "nvme_iov_md": false 00:08:45.759 }, 00:08:45.759 "memory_domains": [ 00:08:45.759 { 00:08:45.759 "dma_device_id": "system", 00:08:45.759 "dma_device_type": 1 00:08:45.759 } 00:08:45.759 ], 00:08:45.759 "driver_specific": { 00:08:45.759 "nvme": [ 00:08:45.759 { 00:08:45.759 "trid": { 00:08:45.759 "trtype": "TCP", 00:08:45.759 "adrfam": "IPv4", 00:08:45.759 "traddr": "10.0.0.2", 00:08:45.759 "trsvcid": "4420", 00:08:45.759 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:45.759 }, 00:08:45.759 "ctrlr_data": { 00:08:45.759 "cntlid": 1, 00:08:45.759 "vendor_id": "0x8086", 00:08:45.759 "model_number": "SPDK bdev Controller", 00:08:45.759 "serial_number": "SPDK0", 00:08:45.759 "firmware_revision": "25.01", 00:08:45.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:45.759 "oacs": { 00:08:45.759 "security": 0, 00:08:45.759 "format": 0, 00:08:45.759 "firmware": 0, 00:08:45.759 "ns_manage": 0 00:08:45.759 }, 00:08:45.759 "multi_ctrlr": true, 00:08:45.759 "ana_reporting": false 00:08:45.759 }, 00:08:45.759 "vs": { 00:08:45.759 "nvme_version": "1.3" 00:08:45.759 }, 00:08:45.759 "ns_data": { 00:08:45.759 "id": 1, 00:08:45.759 "can_share": true 00:08:45.759 } 00:08:45.759 } 00:08:45.759 ], 00:08:45.759 "mp_policy": "active_passive" 00:08:45.759 } 00:08:45.759 } 00:08:45.759 ] 00:08:45.759 18:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=624041 
00:08:45.759 18:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:45.759 18:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:45.759 Running I/O for 10 seconds... 00:08:46.696 Latency(us) 00:08:46.696 [2024-11-17T17:29:33.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.696 Nvme0n1 : 1.00 15052.00 58.80 0.00 0.00 0.00 0.00 0.00 00:08:46.696 [2024-11-17T17:29:33.272Z] =================================================================================================================== 00:08:46.696 [2024-11-17T17:29:33.272Z] Total : 15052.00 58.80 0.00 0.00 0.00 0.00 0.00 00:08:46.696 00:08:47.633 18:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 770878d6-e901-49e6-aea2-74933d30c66b 00:08:47.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.893 Nvme0n1 : 2.00 15273.00 59.66 0.00 0.00 0.00 0.00 0.00 00:08:47.893 [2024-11-17T17:29:34.469Z] =================================================================================================================== 00:08:47.893 [2024-11-17T17:29:34.469Z] Total : 15273.00 59.66 0.00 0.00 0.00 0.00 0.00 00:08:47.893 00:08:47.893 true 00:08:47.893 18:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 770878d6-e901-49e6-aea2-74933d30c66b 00:08:47.893 18:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:48.152 18:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:48.152 18:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:48.152 18:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 624041 00:08:48.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.720 Nvme0n1 : 3.00 15346.67 59.95 0.00 0.00 0.00 0.00 0.00 00:08:48.720 [2024-11-17T17:29:35.296Z] =================================================================================================================== 00:08:48.720 [2024-11-17T17:29:35.296Z] Total : 15346.67 59.95 0.00 0.00 0.00 0.00 0.00 00:08:48.720 00:08:49.659 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.659 Nvme0n1 : 4.00 15447.75 60.34 0.00 0.00 0.00 0.00 0.00 00:08:49.659 [2024-11-17T17:29:36.235Z] =================================================================================================================== 00:08:49.659 [2024-11-17T17:29:36.235Z] Total : 15447.75 60.34 0.00 0.00 0.00 0.00 0.00 00:08:49.659 00:08:51.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.036 Nvme0n1 : 5.00 15521.40 60.63 0.00 0.00 0.00 0.00 0.00 00:08:51.036 [2024-11-17T17:29:37.612Z] =================================================================================================================== 00:08:51.036 [2024-11-17T17:29:37.612Z] Total : 15521.40 60.63 0.00 0.00 0.00 0.00 0.00 00:08:51.036 00:08:51.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.973 Nvme0n1 : 6.00 15539.00 60.70 0.00 0.00 0.00 0.00 0.00 00:08:51.973 [2024-11-17T17:29:38.549Z] =================================================================================================================== 00:08:51.973 
[2024-11-17T17:29:38.549Z] Total : 15539.00 60.70 0.00 0.00 0.00 0.00 0.00 00:08:51.973 00:08:52.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.988 Nvme0n1 : 7.00 15578.14 60.85 0.00 0.00 0.00 0.00 0.00 00:08:52.988 [2024-11-17T17:29:39.564Z] =================================================================================================================== 00:08:52.988 [2024-11-17T17:29:39.564Z] Total : 15578.14 60.85 0.00 0.00 0.00 0.00 0.00 00:08:52.988 00:08:53.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.922 Nvme0n1 : 8.00 15603.62 60.95 0.00 0.00 0.00 0.00 0.00 00:08:53.922 [2024-11-17T17:29:40.498Z] =================================================================================================================== 00:08:53.922 [2024-11-17T17:29:40.498Z] Total : 15603.62 60.95 0.00 0.00 0.00 0.00 0.00 00:08:53.922 00:08:54.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.857 Nvme0n1 : 9.00 15633.78 61.07 0.00 0.00 0.00 0.00 0.00 00:08:54.857 [2024-11-17T17:29:41.433Z] =================================================================================================================== 00:08:54.857 [2024-11-17T17:29:41.433Z] Total : 15633.78 61.07 0.00 0.00 0.00 0.00 0.00 00:08:54.857 00:08:55.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.791 Nvme0n1 : 10.00 15670.60 61.21 0.00 0.00 0.00 0.00 0.00 00:08:55.791 [2024-11-17T17:29:42.367Z] =================================================================================================================== 00:08:55.791 [2024-11-17T17:29:42.367Z] Total : 15670.60 61.21 0.00 0.00 0.00 0.00 0.00 00:08:55.791 00:08:55.791 00:08:55.791 Latency(us) 00:08:55.791 [2024-11-17T17:29:42.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:55.791 Nvme0n1 : 10.01 15671.60 61.22 0.00 0.00 8163.40 4417.61 15825.73 00:08:55.791 [2024-11-17T17:29:42.367Z] =================================================================================================================== 00:08:55.791 [2024-11-17T17:29:42.367Z] Total : 15671.60 61.22 0.00 0.00 8163.40 4417.61 15825.73 00:08:55.791 { 00:08:55.791 "results": [ 00:08:55.791 { 00:08:55.791 "job": "Nvme0n1", 00:08:55.791 "core_mask": "0x2", 00:08:55.791 "workload": "randwrite", 00:08:55.791 "status": "finished", 00:08:55.791 "queue_depth": 128, 00:08:55.791 "io_size": 4096, 00:08:55.791 "runtime": 10.007531, 00:08:55.791 "iops": 15671.59771975725, 00:08:55.791 "mibps": 61.21717859280176, 00:08:55.791 "io_failed": 0, 00:08:55.791 "io_timeout": 0, 00:08:55.791 "avg_latency_us": 8163.396044867445, 00:08:55.791 "min_latency_us": 4417.6118518518515, 00:08:55.791 "max_latency_us": 15825.730370370371 00:08:55.791 } 00:08:55.791 ], 00:08:55.791 "core_count": 1 00:08:55.791 } 00:08:55.791 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 623911 00:08:55.791 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 623911 ']' 00:08:55.791 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 623911 00:08:55.791 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:55.791 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.791 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 623911 00:08:55.791 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:55.791 18:29:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:55.791 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 623911' 00:08:55.791 killing process with pid 623911 00:08:55.791 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 623911 00:08:55.791 Received shutdown signal, test time was about 10.000000 seconds 00:08:55.791 00:08:55.791 Latency(us) 00:08:55.791 [2024-11-17T17:29:42.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.791 [2024-11-17T17:29:42.367Z] =================================================================================================================== 00:08:55.791 [2024-11-17T17:29:42.367Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:55.791 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 623911 00:08:56.048 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.305 18:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:56.562 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 770878d6-e901-49e6-aea2-74933d30c66b 00:08:56.562 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:56.821 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:56.821 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:56.821 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:57.079 [2024-11-17 18:29:43.579604] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:57.079 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 770878d6-e901-49e6-aea2-74933d30c66b 00:08:57.079 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:57.079 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 770878d6-e901-49e6-aea2-74933d30c66b 00:08:57.079 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.079 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:57.079 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.079 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:57.079 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.079 18:29:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:57.079 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.079 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:57.079 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 770878d6-e901-49e6-aea2-74933d30c66b 00:08:57.337 request: 00:08:57.337 { 00:08:57.337 "uuid": "770878d6-e901-49e6-aea2-74933d30c66b", 00:08:57.337 "method": "bdev_lvol_get_lvstores", 00:08:57.337 "req_id": 1 00:08:57.337 } 00:08:57.337 Got JSON-RPC error response 00:08:57.337 response: 00:08:57.337 { 00:08:57.337 "code": -19, 00:08:57.337 "message": "No such device" 00:08:57.337 } 00:08:57.337 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:57.337 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:57.337 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:57.337 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:57.337 18:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:57.595 aio_bdev 00:08:57.595 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev dd0764d3-479d-48cf-80ad-b8d5c2ea257b 00:08:57.595 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=dd0764d3-479d-48cf-80ad-b8d5c2ea257b 00:08:57.595 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.595 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:57.595 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.595 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.595 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:57.854 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dd0764d3-479d-48cf-80ad-b8d5c2ea257b -t 2000 00:08:58.112 [ 00:08:58.112 { 00:08:58.112 "name": "dd0764d3-479d-48cf-80ad-b8d5c2ea257b", 00:08:58.112 "aliases": [ 00:08:58.112 "lvs/lvol" 00:08:58.112 ], 00:08:58.112 "product_name": "Logical Volume", 00:08:58.112 "block_size": 4096, 00:08:58.112 "num_blocks": 38912, 00:08:58.112 "uuid": "dd0764d3-479d-48cf-80ad-b8d5c2ea257b", 00:08:58.112 "assigned_rate_limits": { 00:08:58.112 "rw_ios_per_sec": 0, 00:08:58.112 "rw_mbytes_per_sec": 0, 00:08:58.112 "r_mbytes_per_sec": 0, 00:08:58.112 "w_mbytes_per_sec": 0 00:08:58.112 }, 00:08:58.112 "claimed": false, 00:08:58.112 "zoned": false, 00:08:58.112 "supported_io_types": { 00:08:58.112 "read": true, 00:08:58.112 "write": true, 00:08:58.112 "unmap": true, 00:08:58.112 "flush": false, 00:08:58.112 "reset": true, 00:08:58.112 
"nvme_admin": false, 00:08:58.112 "nvme_io": false, 00:08:58.112 "nvme_io_md": false, 00:08:58.112 "write_zeroes": true, 00:08:58.112 "zcopy": false, 00:08:58.112 "get_zone_info": false, 00:08:58.112 "zone_management": false, 00:08:58.112 "zone_append": false, 00:08:58.112 "compare": false, 00:08:58.112 "compare_and_write": false, 00:08:58.112 "abort": false, 00:08:58.112 "seek_hole": true, 00:08:58.112 "seek_data": true, 00:08:58.112 "copy": false, 00:08:58.112 "nvme_iov_md": false 00:08:58.112 }, 00:08:58.112 "driver_specific": { 00:08:58.112 "lvol": { 00:08:58.112 "lvol_store_uuid": "770878d6-e901-49e6-aea2-74933d30c66b", 00:08:58.113 "base_bdev": "aio_bdev", 00:08:58.113 "thin_provision": false, 00:08:58.113 "num_allocated_clusters": 38, 00:08:58.113 "snapshot": false, 00:08:58.113 "clone": false, 00:08:58.113 "esnap_clone": false 00:08:58.113 } 00:08:58.113 } 00:08:58.113 } 00:08:58.113 ] 00:08:58.113 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:58.113 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 770878d6-e901-49e6-aea2-74933d30c66b 00:08:58.113 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:58.371 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:58.371 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 770878d6-e901-49e6-aea2-74933d30c66b 00:08:58.371 18:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:58.937 18:29:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:58.937 18:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dd0764d3-479d-48cf-80ad-b8d5c2ea257b 00:08:58.937 18:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 770878d6-e901-49e6-aea2-74933d30c66b 00:08:59.196 18:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:59.455 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.713 00:08:59.713 real 0m17.611s 00:08:59.713 user 0m17.226s 00:08:59.713 sys 0m1.794s 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:59.713 ************************************ 00:08:59.713 END TEST lvs_grow_clean 00:08:59.713 ************************************ 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.713 ************************************ 
00:08:59.713 START TEST lvs_grow_dirty 00:08:59.713 ************************************ 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.713 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.972 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:59.972 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:00.230 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=41d4098d-eb79-4991-b5f3-ae6a7927d4ab 00:09:00.230 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4098d-eb79-4991-b5f3-ae6a7927d4ab 00:09:00.230 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:00.489 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:00.489 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:00.489 18:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 41d4098d-eb79-4991-b5f3-ae6a7927d4ab lvol 150 00:09:00.747 18:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f633ecaf-104b-4866-ac0f-8da35d5f3316 00:09:00.747 18:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:00.747 18:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:01.006 [2024-11-17 18:29:47.454071] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:01.006 [2024-11-17 18:29:47.454165] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:01.006 true 00:09:01.006 18:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4098d-eb79-4991-b5f3-ae6a7927d4ab 00:09:01.006 18:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:01.264 18:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:01.264 18:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:01.523 18:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f633ecaf-104b-4866-ac0f-8da35d5f3316 00:09:01.781 18:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:02.040 [2024-11-17 18:29:48.541376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.040 18:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:02.299 18:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=625999 00:09:02.299 18:29:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:02.299 18:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:02.299 18:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 625999 /var/tmp/bdevperf.sock 00:09:02.299 18:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 625999 ']' 00:09:02.299 18:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:02.299 18:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.299 18:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:02.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:02.299 18:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.299 18:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:02.299 [2024-11-17 18:29:48.871597] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:09:02.299 [2024-11-17 18:29:48.871669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625999 ] 00:09:02.557 [2024-11-17 18:29:48.941424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.557 [2024-11-17 18:29:48.990312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.557 18:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.557 18:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:02.557 18:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:03.127 Nvme0n1 00:09:03.127 18:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:03.386 [ 00:09:03.386 { 00:09:03.386 "name": "Nvme0n1", 00:09:03.386 "aliases": [ 00:09:03.386 "f633ecaf-104b-4866-ac0f-8da35d5f3316" 00:09:03.386 ], 00:09:03.386 "product_name": "NVMe disk", 00:09:03.386 "block_size": 4096, 00:09:03.386 "num_blocks": 38912, 00:09:03.386 "uuid": "f633ecaf-104b-4866-ac0f-8da35d5f3316", 00:09:03.386 "numa_id": 0, 00:09:03.386 "assigned_rate_limits": { 00:09:03.386 "rw_ios_per_sec": 0, 00:09:03.386 "rw_mbytes_per_sec": 0, 00:09:03.386 "r_mbytes_per_sec": 0, 00:09:03.386 "w_mbytes_per_sec": 0 00:09:03.386 }, 00:09:03.386 "claimed": false, 00:09:03.386 "zoned": false, 00:09:03.386 "supported_io_types": { 00:09:03.386 "read": true, 
00:09:03.386 "write": true, 00:09:03.386 "unmap": true, 00:09:03.386 "flush": true, 00:09:03.386 "reset": true, 00:09:03.386 "nvme_admin": true, 00:09:03.386 "nvme_io": true, 00:09:03.386 "nvme_io_md": false, 00:09:03.386 "write_zeroes": true, 00:09:03.386 "zcopy": false, 00:09:03.386 "get_zone_info": false, 00:09:03.386 "zone_management": false, 00:09:03.386 "zone_append": false, 00:09:03.386 "compare": true, 00:09:03.386 "compare_and_write": true, 00:09:03.386 "abort": true, 00:09:03.386 "seek_hole": false, 00:09:03.386 "seek_data": false, 00:09:03.386 "copy": true, 00:09:03.386 "nvme_iov_md": false 00:09:03.386 }, 00:09:03.386 "memory_domains": [ 00:09:03.386 { 00:09:03.386 "dma_device_id": "system", 00:09:03.386 "dma_device_type": 1 00:09:03.386 } 00:09:03.386 ], 00:09:03.386 "driver_specific": { 00:09:03.386 "nvme": [ 00:09:03.386 { 00:09:03.386 "trid": { 00:09:03.386 "trtype": "TCP", 00:09:03.386 "adrfam": "IPv4", 00:09:03.386 "traddr": "10.0.0.2", 00:09:03.386 "trsvcid": "4420", 00:09:03.386 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:03.386 }, 00:09:03.386 "ctrlr_data": { 00:09:03.386 "cntlid": 1, 00:09:03.386 "vendor_id": "0x8086", 00:09:03.386 "model_number": "SPDK bdev Controller", 00:09:03.386 "serial_number": "SPDK0", 00:09:03.386 "firmware_revision": "25.01", 00:09:03.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:03.386 "oacs": { 00:09:03.386 "security": 0, 00:09:03.386 "format": 0, 00:09:03.386 "firmware": 0, 00:09:03.386 "ns_manage": 0 00:09:03.386 }, 00:09:03.386 "multi_ctrlr": true, 00:09:03.386 "ana_reporting": false 00:09:03.386 }, 00:09:03.386 "vs": { 00:09:03.386 "nvme_version": "1.3" 00:09:03.386 }, 00:09:03.386 "ns_data": { 00:09:03.386 "id": 1, 00:09:03.386 "can_share": true 00:09:03.386 } 00:09:03.386 } 00:09:03.386 ], 00:09:03.386 "mp_policy": "active_passive" 00:09:03.386 } 00:09:03.386 } 00:09:03.386 ] 00:09:03.386 18:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=626118 
00:09:03.386 18:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:03.386 18:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:03.386 Running I/O for 10 seconds... 00:09:04.322 Latency(us) 00:09:04.322 [2024-11-17T17:29:50.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.322 Nvme0n1 : 1.00 14646.00 57.21 0.00 0.00 0.00 0.00 0.00 00:09:04.322 [2024-11-17T17:29:50.898Z] =================================================================================================================== 00:09:04.322 [2024-11-17T17:29:50.898Z] Total : 14646.00 57.21 0.00 0.00 0.00 0.00 0.00 00:09:04.322 00:09:05.258 18:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 41d4098d-eb79-4991-b5f3-ae6a7927d4ab 00:09:05.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.516 Nvme0n1 : 2.00 15006.50 58.62 0.00 0.00 0.00 0.00 0.00 00:09:05.516 [2024-11-17T17:29:52.092Z] =================================================================================================================== 00:09:05.516 [2024-11-17T17:29:52.092Z] Total : 15006.50 58.62 0.00 0.00 0.00 0.00 0.00 00:09:05.516 00:09:05.516 true 00:09:05.516 18:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4098d-eb79-4991-b5f3-ae6a7927d4ab 00:09:05.516 18:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:05.775 18:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:05.775 18:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:05.775 18:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 626118 00:09:06.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.341 Nvme0n1 : 3.00 15106.00 59.01 0.00 0.00 0.00 0.00 0.00 00:09:06.341 [2024-11-17T17:29:52.917Z] =================================================================================================================== 00:09:06.341 [2024-11-17T17:29:52.917Z] Total : 15106.00 59.01 0.00 0.00 0.00 0.00 0.00 00:09:06.341 00:09:07.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.276 Nvme0n1 : 4.00 15234.75 59.51 0.00 0.00 0.00 0.00 0.00 00:09:07.276 [2024-11-17T17:29:53.852Z] =================================================================================================================== 00:09:07.276 [2024-11-17T17:29:53.852Z] Total : 15234.75 59.51 0.00 0.00 0.00 0.00 0.00 00:09:07.276 00:09:08.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.651 Nvme0n1 : 5.00 15312.00 59.81 0.00 0.00 0.00 0.00 0.00 00:09:08.651 [2024-11-17T17:29:55.227Z] =================================================================================================================== 00:09:08.651 [2024-11-17T17:29:55.227Z] Total : 15312.00 59.81 0.00 0.00 0.00 0.00 0.00 00:09:08.651 00:09:09.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.584 Nvme0n1 : 6.00 15363.50 60.01 0.00 0.00 0.00 0.00 0.00 00:09:09.584 [2024-11-17T17:29:56.160Z] =================================================================================================================== 00:09:09.584 
[2024-11-17T17:29:56.160Z] Total : 15363.50 60.01 0.00 0.00 0.00 0.00 0.00 00:09:09.584 00:09:10.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.515 Nvme0n1 : 7.00 15419.29 60.23 0.00 0.00 0.00 0.00 0.00 00:09:10.515 [2024-11-17T17:29:57.091Z] =================================================================================================================== 00:09:10.515 [2024-11-17T17:29:57.091Z] Total : 15419.29 60.23 0.00 0.00 0.00 0.00 0.00 00:09:10.515 00:09:11.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.448 Nvme0n1 : 8.00 15453.38 60.36 0.00 0.00 0.00 0.00 0.00 00:09:11.448 [2024-11-17T17:29:58.024Z] =================================================================================================================== 00:09:11.448 [2024-11-17T17:29:58.024Z] Total : 15453.38 60.36 0.00 0.00 0.00 0.00 0.00 00:09:11.448 00:09:12.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.382 Nvme0n1 : 9.00 15500.22 60.55 0.00 0.00 0.00 0.00 0.00 00:09:12.382 [2024-11-17T17:29:58.958Z] =================================================================================================================== 00:09:12.382 [2024-11-17T17:29:58.958Z] Total : 15500.22 60.55 0.00 0.00 0.00 0.00 0.00 00:09:12.382 00:09:13.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.317 Nvme0n1 : 10.00 15525.00 60.64 0.00 0.00 0.00 0.00 0.00 00:09:13.317 [2024-11-17T17:29:59.893Z] =================================================================================================================== 00:09:13.317 [2024-11-17T17:29:59.893Z] Total : 15525.00 60.64 0.00 0.00 0.00 0.00 0.00 00:09:13.317 00:09:13.317 00:09:13.317 Latency(us) 00:09:13.317 [2024-11-17T17:29:59.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:13.317 Nvme0n1 : 10.00 15530.33 60.67 0.00 0.00 8237.67 3980.71 18932.62 00:09:13.317 [2024-11-17T17:29:59.893Z] =================================================================================================================== 00:09:13.317 [2024-11-17T17:29:59.893Z] Total : 15530.33 60.67 0.00 0.00 8237.67 3980.71 18932.62 00:09:13.317 { 00:09:13.317 "results": [ 00:09:13.317 { 00:09:13.317 "job": "Nvme0n1", 00:09:13.317 "core_mask": "0x2", 00:09:13.317 "workload": "randwrite", 00:09:13.317 "status": "finished", 00:09:13.317 "queue_depth": 128, 00:09:13.317 "io_size": 4096, 00:09:13.317 "runtime": 10.004808, 00:09:13.317 "iops": 15530.333015885963, 00:09:13.317 "mibps": 60.66536334330454, 00:09:13.317 "io_failed": 0, 00:09:13.317 "io_timeout": 0, 00:09:13.317 "avg_latency_us": 8237.66955359999, 00:09:13.317 "min_latency_us": 3980.705185185185, 00:09:13.317 "max_latency_us": 18932.62222222222 00:09:13.317 } 00:09:13.317 ], 00:09:13.317 "core_count": 1 00:09:13.317 } 00:09:13.317 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 625999 00:09:13.317 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 625999 ']' 00:09:13.317 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 625999 00:09:13.317 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:13.317 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.317 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 625999 00:09:13.575 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:13.575 18:29:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:13.575 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 625999' 00:09:13.575 killing process with pid 625999 00:09:13.575 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 625999 00:09:13.575 Received shutdown signal, test time was about 10.000000 seconds 00:09:13.575 00:09:13.575 Latency(us) 00:09:13.575 [2024-11-17T17:30:00.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.575 [2024-11-17T17:30:00.151Z] =================================================================================================================== 00:09:13.575 [2024-11-17T17:30:00.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:13.575 18:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 625999 00:09:13.575 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:13.833 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:14.400 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4098d-eb79-4991-b5f3-ae6a7927d4ab 00:09:14.400 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:14.658 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:09:14.658 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:14.658 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 623466 00:09:14.658 18:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 623466 00:09:14.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 623466 Killed "${NVMF_APP[@]}" "$@" 00:09:14.658 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:14.658 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:14.658 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:14.658 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.658 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:14.658 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=627533 00:09:14.658 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:14.658 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 627533 00:09:14.658 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 627533 ']' 00:09:14.658 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.658 18:30:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.658 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.658 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.658 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:14.658 [2024-11-17 18:30:01.069311] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:14.658 [2024-11-17 18:30:01.069404] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.658 [2024-11-17 18:30:01.144584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.658 [2024-11-17 18:30:01.190891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.658 [2024-11-17 18:30:01.190964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.658 [2024-11-17 18:30:01.190985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.658 [2024-11-17 18:30:01.191003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.658 [2024-11-17 18:30:01.191031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:14.658 [2024-11-17 18:30:01.191596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.916 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.916 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:14.916 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:14.916 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:14.916 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:14.916 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.916 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:15.175 [2024-11-17 18:30:01.579109] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:15.175 [2024-11-17 18:30:01.579246] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:15.175 [2024-11-17 18:30:01.579309] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:15.175 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:15.175 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f633ecaf-104b-4866-ac0f-8da35d5f3316 00:09:15.175 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f633ecaf-104b-4866-ac0f-8da35d5f3316 
00:09:15.175 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.175 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:15.175 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.175 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.175 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:15.434 18:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f633ecaf-104b-4866-ac0f-8da35d5f3316 -t 2000 00:09:15.692 [ 00:09:15.692 { 00:09:15.692 "name": "f633ecaf-104b-4866-ac0f-8da35d5f3316", 00:09:15.692 "aliases": [ 00:09:15.692 "lvs/lvol" 00:09:15.692 ], 00:09:15.692 "product_name": "Logical Volume", 00:09:15.692 "block_size": 4096, 00:09:15.692 "num_blocks": 38912, 00:09:15.692 "uuid": "f633ecaf-104b-4866-ac0f-8da35d5f3316", 00:09:15.692 "assigned_rate_limits": { 00:09:15.692 "rw_ios_per_sec": 0, 00:09:15.692 "rw_mbytes_per_sec": 0, 00:09:15.692 "r_mbytes_per_sec": 0, 00:09:15.692 "w_mbytes_per_sec": 0 00:09:15.692 }, 00:09:15.692 "claimed": false, 00:09:15.692 "zoned": false, 00:09:15.692 "supported_io_types": { 00:09:15.692 "read": true, 00:09:15.692 "write": true, 00:09:15.692 "unmap": true, 00:09:15.692 "flush": false, 00:09:15.692 "reset": true, 00:09:15.692 "nvme_admin": false, 00:09:15.692 "nvme_io": false, 00:09:15.692 "nvme_io_md": false, 00:09:15.692 "write_zeroes": true, 00:09:15.692 "zcopy": false, 00:09:15.692 "get_zone_info": false, 00:09:15.692 "zone_management": false, 00:09:15.692 "zone_append": 
false, 00:09:15.692 "compare": false, 00:09:15.692 "compare_and_write": false, 00:09:15.692 "abort": false, 00:09:15.692 "seek_hole": true, 00:09:15.692 "seek_data": true, 00:09:15.692 "copy": false, 00:09:15.692 "nvme_iov_md": false 00:09:15.692 }, 00:09:15.692 "driver_specific": { 00:09:15.692 "lvol": { 00:09:15.692 "lvol_store_uuid": "41d4098d-eb79-4991-b5f3-ae6a7927d4ab", 00:09:15.692 "base_bdev": "aio_bdev", 00:09:15.692 "thin_provision": false, 00:09:15.692 "num_allocated_clusters": 38, 00:09:15.692 "snapshot": false, 00:09:15.692 "clone": false, 00:09:15.692 "esnap_clone": false 00:09:15.692 } 00:09:15.692 } 00:09:15.692 } 00:09:15.692 ] 00:09:15.692 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:15.692 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4098d-eb79-4991-b5f3-ae6a7927d4ab 00:09:15.692 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:15.950 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:15.950 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4098d-eb79-4991-b5f3-ae6a7927d4ab 00:09:15.950 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:16.207 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:16.207 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:16.465 [2024-11-17 18:30:02.960653] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:16.465 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4098d-eb79-4991-b5f3-ae6a7927d4ab 00:09:16.465 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:16.465 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4098d-eb79-4991-b5f3-ae6a7927d4ab 00:09:16.465 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.465 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.465 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.465 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.465 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.465 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.465 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.465 18:30:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:16.465 18:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4098d-eb79-4991-b5f3-ae6a7927d4ab 00:09:16.724 request: 00:09:16.724 { 00:09:16.724 "uuid": "41d4098d-eb79-4991-b5f3-ae6a7927d4ab", 00:09:16.724 "method": "bdev_lvol_get_lvstores", 00:09:16.724 "req_id": 1 00:09:16.724 } 00:09:16.724 Got JSON-RPC error response 00:09:16.724 response: 00:09:16.724 { 00:09:16.724 "code": -19, 00:09:16.724 "message": "No such device" 00:09:16.724 } 00:09:16.724 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:16.724 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:16.724 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:16.724 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:16.724 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:16.981 aio_bdev 00:09:16.981 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f633ecaf-104b-4866-ac0f-8da35d5f3316 00:09:16.981 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f633ecaf-104b-4866-ac0f-8da35d5f3316 00:09:16.981 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.981 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:16.981 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.981 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.981 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:17.547 18:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f633ecaf-104b-4866-ac0f-8da35d5f3316 -t 2000 00:09:17.547 [ 00:09:17.547 { 00:09:17.547 "name": "f633ecaf-104b-4866-ac0f-8da35d5f3316", 00:09:17.547 "aliases": [ 00:09:17.547 "lvs/lvol" 00:09:17.547 ], 00:09:17.547 "product_name": "Logical Volume", 00:09:17.547 "block_size": 4096, 00:09:17.547 "num_blocks": 38912, 00:09:17.547 "uuid": "f633ecaf-104b-4866-ac0f-8da35d5f3316", 00:09:17.547 "assigned_rate_limits": { 00:09:17.547 "rw_ios_per_sec": 0, 00:09:17.547 "rw_mbytes_per_sec": 0, 00:09:17.547 "r_mbytes_per_sec": 0, 00:09:17.547 "w_mbytes_per_sec": 0 00:09:17.547 }, 00:09:17.547 "claimed": false, 00:09:17.547 "zoned": false, 00:09:17.547 "supported_io_types": { 00:09:17.547 "read": true, 00:09:17.547 "write": true, 00:09:17.547 "unmap": true, 00:09:17.547 "flush": false, 00:09:17.547 "reset": true, 00:09:17.547 "nvme_admin": false, 00:09:17.547 "nvme_io": false, 00:09:17.547 "nvme_io_md": false, 00:09:17.547 "write_zeroes": true, 00:09:17.547 "zcopy": false, 00:09:17.547 "get_zone_info": false, 00:09:17.547 "zone_management": false, 00:09:17.547 "zone_append": false, 00:09:17.547 "compare": false, 00:09:17.547 "compare_and_write": false, 
00:09:17.547 "abort": false, 00:09:17.547 "seek_hole": true, 00:09:17.547 "seek_data": true, 00:09:17.547 "copy": false, 00:09:17.547 "nvme_iov_md": false 00:09:17.547 }, 00:09:17.547 "driver_specific": { 00:09:17.547 "lvol": { 00:09:17.547 "lvol_store_uuid": "41d4098d-eb79-4991-b5f3-ae6a7927d4ab", 00:09:17.547 "base_bdev": "aio_bdev", 00:09:17.547 "thin_provision": false, 00:09:17.547 "num_allocated_clusters": 38, 00:09:17.547 "snapshot": false, 00:09:17.547 "clone": false, 00:09:17.547 "esnap_clone": false 00:09:17.547 } 00:09:17.547 } 00:09:17.547 } 00:09:17.547 ] 00:09:17.547 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:17.547 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4098d-eb79-4991-b5f3-ae6a7927d4ab 00:09:17.547 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:18.113 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:18.113 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41d4098d-eb79-4991-b5f3-ae6a7927d4ab 00:09:18.113 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:18.113 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:18.113 18:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f633ecaf-104b-4866-ac0f-8da35d5f3316 00:09:18.679 18:30:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 41d4098d-eb79-4991-b5f3-ae6a7927d4ab 00:09:18.679 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:19.246 00:09:19.246 real 0m19.446s 00:09:19.246 user 0m49.096s 00:09:19.246 sys 0m4.565s 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:19.246 ************************************ 00:09:19.246 END TEST lvs_grow_dirty 00:09:19.246 ************************************ 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:19.246 nvmf_trace.0 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.246 rmmod nvme_tcp 00:09:19.246 rmmod nvme_fabrics 00:09:19.246 rmmod nvme_keyring 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 627533 ']' 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 627533 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 627533 ']' 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 627533 
00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 627533 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 627533' 00:09:19.246 killing process with pid 627533 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 627533 00:09:19.246 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 627533 00:09:19.506 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.524 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.524 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.524 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:19.524 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:19.524 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.524 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.524 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.524 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:09:19.524 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.524 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.524 18:30:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.467 18:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.467 00:09:21.467 real 0m42.712s 00:09:21.467 user 1m12.498s 00:09:21.467 sys 0m8.453s 00:09:21.467 18:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.467 18:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:21.467 ************************************ 00:09:21.467 END TEST nvmf_lvs_grow 00:09:21.467 ************************************ 00:09:21.467 18:30:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:21.467 18:30:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:21.467 18:30:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.467 18:30:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.467 ************************************ 00:09:21.467 START TEST nvmf_bdev_io_wait 00:09:21.467 ************************************ 00:09:21.467 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:21.727 * Looking for test storage... 
00:09:21.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:21.727 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.727 --rc genhtml_branch_coverage=1 00:09:21.727 --rc genhtml_function_coverage=1 00:09:21.727 --rc genhtml_legend=1 00:09:21.727 --rc geninfo_all_blocks=1 00:09:21.727 --rc geninfo_unexecuted_blocks=1 00:09:21.727 00:09:21.727 ' 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:21.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.727 --rc genhtml_branch_coverage=1 00:09:21.727 --rc genhtml_function_coverage=1 00:09:21.727 --rc genhtml_legend=1 00:09:21.727 --rc geninfo_all_blocks=1 00:09:21.727 --rc geninfo_unexecuted_blocks=1 00:09:21.727 00:09:21.727 ' 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:21.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.727 --rc genhtml_branch_coverage=1 00:09:21.727 --rc genhtml_function_coverage=1 00:09:21.727 --rc genhtml_legend=1 00:09:21.727 --rc geninfo_all_blocks=1 00:09:21.727 --rc geninfo_unexecuted_blocks=1 00:09:21.727 00:09:21.727 ' 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:21.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.727 --rc genhtml_branch_coverage=1 00:09:21.727 --rc genhtml_function_coverage=1 00:09:21.727 --rc genhtml_legend=1 00:09:21.727 --rc geninfo_all_blocks=1 00:09:21.727 --rc geninfo_unexecuted_blocks=1 00:09:21.727 00:09:21.727 ' 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.727 18:30:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.727 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.728 18:30:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.262 18:30:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:24.262 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:24.262 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.262 18:30:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.262 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:24.263 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.263 
18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:24.263 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.263 18:30:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:09:24.263 00:09:24.263 --- 10.0.0.2 ping statistics --- 00:09:24.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.263 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:09:24.263 00:09:24.263 --- 10.0.0.1 ping statistics --- 00:09:24.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.263 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=630743 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 630743 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 630743 ']' 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.263 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.263 [2024-11-17 18:30:10.664872] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:24.263 [2024-11-17 18:30:10.664956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.263 [2024-11-17 18:30:10.742516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.263 [2024-11-17 18:30:10.795351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.263 [2024-11-17 18:30:10.795399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:24.263 [2024-11-17 18:30:10.795420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.263 [2024-11-17 18:30:10.795437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.263 [2024-11-17 18:30:10.795452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.263 [2024-11-17 18:30:10.797118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.263 [2024-11-17 18:30:10.797183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.263 [2024-11-17 18:30:10.797245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.263 [2024-11-17 18:30:10.797247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.521 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.521 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:24.521 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:24.521 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.521 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.521 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.521 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:24.521 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.521 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.521 18:30:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.521 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:24.521 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.521 18:30:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.521 [2024-11-17 18:30:11.010638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.521 Malloc0 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.521 
18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.521 [2024-11-17 18:30:11.061469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=630780 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=630782 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:24.521 { 00:09:24.521 "params": { 00:09:24.521 "name": "Nvme$subsystem", 00:09:24.521 "trtype": "$TEST_TRANSPORT", 00:09:24.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.521 "adrfam": "ipv4", 00:09:24.521 "trsvcid": "$NVMF_PORT", 00:09:24.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.521 "hdgst": ${hdgst:-false}, 00:09:24.521 "ddgst": ${ddgst:-false} 00:09:24.521 }, 00:09:24.521 "method": "bdev_nvme_attach_controller" 00:09:24.521 } 00:09:24.521 EOF 00:09:24.521 )") 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=630784 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:24.521 { 00:09:24.521 "params": { 00:09:24.521 
"name": "Nvme$subsystem", 00:09:24.521 "trtype": "$TEST_TRANSPORT", 00:09:24.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.521 "adrfam": "ipv4", 00:09:24.521 "trsvcid": "$NVMF_PORT", 00:09:24.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.521 "hdgst": ${hdgst:-false}, 00:09:24.521 "ddgst": ${ddgst:-false} 00:09:24.521 }, 00:09:24.521 "method": "bdev_nvme_attach_controller" 00:09:24.521 } 00:09:24.521 EOF 00:09:24.521 )") 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=630787 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:24.521 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:24.522 { 00:09:24.522 "params": { 00:09:24.522 "name": "Nvme$subsystem", 00:09:24.522 "trtype": "$TEST_TRANSPORT", 00:09:24.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.522 "adrfam": "ipv4", 00:09:24.522 "trsvcid": "$NVMF_PORT", 00:09:24.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.522 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:24.522 "hdgst": ${hdgst:-false}, 00:09:24.522 "ddgst": ${ddgst:-false} 00:09:24.522 }, 00:09:24.522 "method": "bdev_nvme_attach_controller" 00:09:24.522 } 00:09:24.522 EOF 00:09:24.522 )") 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:24.522 { 00:09:24.522 "params": { 00:09:24.522 "name": "Nvme$subsystem", 00:09:24.522 "trtype": "$TEST_TRANSPORT", 00:09:24.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.522 "adrfam": "ipv4", 00:09:24.522 "trsvcid": "$NVMF_PORT", 00:09:24.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.522 "hdgst": ${hdgst:-false}, 00:09:24.522 "ddgst": ${ddgst:-false} 00:09:24.522 }, 00:09:24.522 "method": "bdev_nvme_attach_controller" 00:09:24.522 } 00:09:24.522 EOF 00:09:24.522 )") 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # 
wait 630780 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:24.522 "params": { 00:09:24.522 "name": "Nvme1", 00:09:24.522 "trtype": "tcp", 00:09:24.522 "traddr": "10.0.0.2", 00:09:24.522 "adrfam": "ipv4", 00:09:24.522 "trsvcid": "4420", 00:09:24.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.522 "hdgst": false, 00:09:24.522 "ddgst": false 00:09:24.522 }, 00:09:24.522 "method": "bdev_nvme_attach_controller" 00:09:24.522 }' 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:24.522 "params": { 00:09:24.522 "name": "Nvme1", 00:09:24.522 "trtype": "tcp", 00:09:24.522 "traddr": "10.0.0.2", 00:09:24.522 "adrfam": "ipv4", 00:09:24.522 "trsvcid": "4420", 00:09:24.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.522 "hdgst": false, 00:09:24.522 "ddgst": false 00:09:24.522 }, 00:09:24.522 "method": "bdev_nvme_attach_controller" 00:09:24.522 }' 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:24.522 "params": { 00:09:24.522 "name": "Nvme1", 00:09:24.522 "trtype": "tcp", 00:09:24.522 
"traddr": "10.0.0.2", 00:09:24.522 "adrfam": "ipv4", 00:09:24.522 "trsvcid": "4420", 00:09:24.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.522 "hdgst": false, 00:09:24.522 "ddgst": false 00:09:24.522 }, 00:09:24.522 "method": "bdev_nvme_attach_controller" 00:09:24.522 }' 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:24.522 18:30:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:24.522 "params": { 00:09:24.522 "name": "Nvme1", 00:09:24.522 "trtype": "tcp", 00:09:24.522 "traddr": "10.0.0.2", 00:09:24.522 "adrfam": "ipv4", 00:09:24.522 "trsvcid": "4420", 00:09:24.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.522 "hdgst": false, 00:09:24.522 "ddgst": false 00:09:24.522 }, 00:09:24.522 "method": "bdev_nvme_attach_controller" 00:09:24.522 }' 00:09:24.779 [2024-11-17 18:30:11.113174] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:24.779 [2024-11-17 18:30:11.113174] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:24.779 [2024-11-17 18:30:11.113174] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:09:24.780 [2024-11-17 18:30:11.113264] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:24.780 [2024-11-17 18:30:11.113264] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:24.780 [2024-11-17 18:30:11.113264] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:24.780 [2024-11-17 18:30:11.113647] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:24.780 [2024-11-17 18:30:11.113740] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:24.780 [2024-11-17 18:30:11.298808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.780 [2024-11-17 18:30:11.340997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:25.037 [2024-11-17 18:30:11.397442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.037 [2024-11-17 18:30:11.439548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:25.037 [2024-11-17 18:30:11.521035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.037 [2024-11-17 18:30:11.566552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:25.037 [2024-11-17 18:30:11.578714] app.c: 919:spdk_app_start: *NOTICE*: Total cores 
available: 1 00:09:25.295 [2024-11-17 18:30:11.617680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:25.295 Running I/O for 1 seconds... 00:09:25.295 Running I/O for 1 seconds... 00:09:25.295 Running I/O for 1 seconds... 00:09:25.295 Running I/O for 1 seconds... 00:09:26.670 5900.00 IOPS, 23.05 MiB/s 00:09:26.670 Latency(us) 00:09:26.670 [2024-11-17T17:30:13.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.670 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:26.670 Nvme1n1 : 1.02 5926.01 23.15 0.00 0.00 21382.09 5849.69 33981.63 00:09:26.670 [2024-11-17T17:30:13.246Z] =================================================================================================================== 00:09:26.670 [2024-11-17T17:30:13.246Z] Total : 5926.01 23.15 0.00 0.00 21382.09 5849.69 33981.63 00:09:26.670 5605.00 IOPS, 21.89 MiB/s [2024-11-17T17:30:13.246Z] 9101.00 IOPS, 35.55 MiB/s 00:09:26.670 Latency(us) 00:09:26.670 [2024-11-17T17:30:13.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.670 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:26.670 Nvme1n1 : 1.01 5694.25 22.24 0.00 0.00 22382.96 6747.78 46409.20 00:09:26.670 [2024-11-17T17:30:13.246Z] =================================================================================================================== 00:09:26.670 [2024-11-17T17:30:13.246Z] Total : 5694.25 22.24 0.00 0.00 22382.96 6747.78 46409.20 00:09:26.670 00:09:26.670 Latency(us) 00:09:26.670 [2024-11-17T17:30:13.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.670 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:26.670 Nvme1n1 : 1.01 9157.73 35.77 0.00 0.00 13917.56 6505.05 24369.68 00:09:26.670 [2024-11-17T17:30:13.246Z] =================================================================================================================== 
00:09:26.670 [2024-11-17T17:30:13.246Z] Total : 9157.73 35.77 0.00 0.00 13917.56 6505.05 24369.68 00:09:26.670 194184.00 IOPS, 758.53 MiB/s 00:09:26.670 Latency(us) 00:09:26.670 [2024-11-17T17:30:13.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.670 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:26.670 Nvme1n1 : 1.00 193822.96 757.12 0.00 0.00 656.88 295.82 1844.72 00:09:26.670 [2024-11-17T17:30:13.246Z] =================================================================================================================== 00:09:26.670 [2024-11-17T17:30:13.246Z] Total : 193822.96 757.12 0.00 0.00 656.88 295.82 1844.72 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 630782 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 630784 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 630787 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:26.670 18:30:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.670 rmmod nvme_tcp 00:09:26.670 rmmod nvme_fabrics 00:09:26.670 rmmod nvme_keyring 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 630743 ']' 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 630743 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 630743 ']' 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 630743 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 630743 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 630743' 00:09:26.670 killing process with pid 630743 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 630743 00:09:26.670 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 630743 00:09:26.930 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.930 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.930 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.930 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:26.930 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:26.930 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.930 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.930 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.930 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:26.930 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.930 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.930 18:30:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.833 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:28.833 00:09:28.833 real 0m7.389s 00:09:28.833 user 0m16.345s 00:09:28.833 sys 0m3.538s 00:09:28.833 18:30:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.833 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:28.833 ************************************ 00:09:28.833 END TEST nvmf_bdev_io_wait 00:09:28.833 ************************************ 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.092 ************************************ 00:09:29.092 START TEST nvmf_queue_depth 00:09:29.092 ************************************ 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:29.092 * Looking for test storage... 
00:09:29.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:29.092 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:29.092 
18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:29.093 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:29.093 --rc genhtml_branch_coverage=1 00:09:29.093 --rc genhtml_function_coverage=1 00:09:29.093 --rc genhtml_legend=1 00:09:29.093 --rc geninfo_all_blocks=1 00:09:29.093 --rc geninfo_unexecuted_blocks=1 00:09:29.093 00:09:29.093 ' 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:29.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.093 --rc genhtml_branch_coverage=1 00:09:29.093 --rc genhtml_function_coverage=1 00:09:29.093 --rc genhtml_legend=1 00:09:29.093 --rc geninfo_all_blocks=1 00:09:29.093 --rc geninfo_unexecuted_blocks=1 00:09:29.093 00:09:29.093 ' 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:29.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.093 --rc genhtml_branch_coverage=1 00:09:29.093 --rc genhtml_function_coverage=1 00:09:29.093 --rc genhtml_legend=1 00:09:29.093 --rc geninfo_all_blocks=1 00:09:29.093 --rc geninfo_unexecuted_blocks=1 00:09:29.093 00:09:29.093 ' 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:29.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.093 --rc genhtml_branch_coverage=1 00:09:29.093 --rc genhtml_function_coverage=1 00:09:29.093 --rc genhtml_legend=1 00:09:29.093 --rc geninfo_all_blocks=1 00:09:29.093 --rc geninfo_unexecuted_blocks=1 00:09:29.093 00:09:29.093 ' 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.093 18:30:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.093 18:30:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.093 18:30:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:29.093 18:30:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.630 18:30:17 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:31.630 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:31.630 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:31.630 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:31.630 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:31.631 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.631 
18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:09:31.631 00:09:31.631 --- 10.0.0.2 ping statistics --- 00:09:31.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.631 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:09:31.631 00:09:31.631 --- 10.0.0.1 ping statistics --- 00:09:31.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.631 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.631 18:30:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.631 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:31.631 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.631 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.631 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.631 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=633018 00:09:31.631 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:31.631 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 633018 00:09:31.631 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 633018 ']' 00:09:31.631 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.631 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.631 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.631 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.631 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.631 [2024-11-17 18:30:18.064102] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:31.631 [2024-11-17 18:30:18.064182] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.631 [2024-11-17 18:30:18.140020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.631 [2024-11-17 18:30:18.183189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.631 [2024-11-17 18:30:18.183247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:31.631 [2024-11-17 18:30:18.183275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.631 [2024-11-17 18:30:18.183286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.631 [2024-11-17 18:30:18.183295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.631 [2024-11-17 18:30:18.183903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.890 [2024-11-17 18:30:18.319990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.890 Malloc0 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.890 [2024-11-17 18:30:18.367593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.890 18:30:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=633089 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 633089 /var/tmp/bdevperf.sock 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 633089 ']' 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:31.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.890 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.890 [2024-11-17 18:30:18.416116] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:09:31.890 [2024-11-17 18:30:18.416192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633089 ] 00:09:32.149 [2024-11-17 18:30:18.483517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.149 [2024-11-17 18:30:18.528378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.149 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.149 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:32.149 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:32.149 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.149 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:32.407 NVMe0n1 00:09:32.407 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.407 18:30:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:32.407 Running I/O for 10 seconds... 
00:09:34.715 8192.00 IOPS, 32.00 MiB/s [2024-11-17T17:30:22.227Z] 8158.00 IOPS, 31.87 MiB/s [2024-11-17T17:30:23.161Z] 8192.00 IOPS, 32.00 MiB/s [2024-11-17T17:30:24.096Z] 8197.25 IOPS, 32.02 MiB/s [2024-11-17T17:30:25.030Z] 8254.80 IOPS, 32.25 MiB/s [2024-11-17T17:30:25.965Z] 8322.67 IOPS, 32.51 MiB/s [2024-11-17T17:30:26.900Z] 8335.71 IOPS, 32.56 MiB/s [2024-11-17T17:30:28.275Z] 8321.38 IOPS, 32.51 MiB/s [2024-11-17T17:30:29.210Z] 8339.00 IOPS, 32.57 MiB/s [2024-11-17T17:30:29.210Z] 8374.10 IOPS, 32.71 MiB/s 00:09:42.634 Latency(us) 00:09:42.634 [2024-11-17T17:30:29.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.634 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:42.634 Verification LBA range: start 0x0 length 0x4000 00:09:42.634 NVMe0n1 : 10.10 8387.62 32.76 0.00 0.00 121493.82 20777.34 72623.60 00:09:42.634 [2024-11-17T17:30:29.210Z] =================================================================================================================== 00:09:42.634 [2024-11-17T17:30:29.210Z] Total : 8387.62 32.76 0.00 0.00 121493.82 20777.34 72623.60 00:09:42.634 { 00:09:42.634 "results": [ 00:09:42.634 { 00:09:42.634 "job": "NVMe0n1", 00:09:42.634 "core_mask": "0x1", 00:09:42.634 "workload": "verify", 00:09:42.634 "status": "finished", 00:09:42.634 "verify_range": { 00:09:42.634 "start": 0, 00:09:42.634 "length": 16384 00:09:42.634 }, 00:09:42.634 "queue_depth": 1024, 00:09:42.634 "io_size": 4096, 00:09:42.634 "runtime": 10.098334, 00:09:42.634 "iops": 8387.621166026, 00:09:42.634 "mibps": 32.76414517978906, 00:09:42.634 "io_failed": 0, 00:09:42.634 "io_timeout": 0, 00:09:42.634 "avg_latency_us": 121493.81505071215, 00:09:42.634 "min_latency_us": 20777.33925925926, 00:09:42.634 "max_latency_us": 72623.59703703703 00:09:42.634 } 00:09:42.634 ], 00:09:42.634 "core_count": 1 00:09:42.634 } 00:09:42.634 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 
633089 00:09:42.634 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 633089 ']' 00:09:42.634 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 633089 00:09:42.634 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:42.634 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.634 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 633089 00:09:42.634 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.634 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.634 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 633089' 00:09:42.634 killing process with pid 633089 00:09:42.634 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 633089 00:09:42.634 Received shutdown signal, test time was about 10.000000 seconds 00:09:42.634 00:09:42.634 Latency(us) 00:09:42.634 [2024-11-17T17:30:29.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.634 [2024-11-17T17:30:29.210Z] =================================================================================================================== 00:09:42.634 [2024-11-17T17:30:29.210Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:42.634 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 633089 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:42.891 
18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.891 rmmod nvme_tcp 00:09:42.891 rmmod nvme_fabrics 00:09:42.891 rmmod nvme_keyring 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 633018 ']' 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 633018 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 633018 ']' 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 633018 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 633018 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 633018' 00:09:42.891 killing process with pid 633018 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 633018 00:09:42.891 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 633018 00:09:43.151 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.151 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:43.151 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:43.151 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:43.151 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:43.151 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:43.151 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:43.151 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.151 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:43.151 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.151 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.151 18:30:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.061 18:30:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:45.061 00:09:45.061 real 0m16.152s 00:09:45.061 user 0m21.508s 00:09:45.061 sys 0m3.621s 00:09:45.061 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.061 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.061 ************************************ 00:09:45.061 END TEST nvmf_queue_depth 00:09:45.061 ************************************ 00:09:45.061 18:30:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:45.061 18:30:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.061 18:30:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.061 18:30:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.321 ************************************ 00:09:45.321 START TEST nvmf_target_multipath 00:09:45.321 ************************************ 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:45.321 * Looking for test storage... 
00:09:45.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:45.321 18:30:31 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:45.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.321 --rc genhtml_branch_coverage=1 00:09:45.321 --rc genhtml_function_coverage=1 00:09:45.321 --rc genhtml_legend=1 00:09:45.321 --rc geninfo_all_blocks=1 00:09:45.321 --rc geninfo_unexecuted_blocks=1 00:09:45.321 00:09:45.321 ' 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:45.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.321 --rc genhtml_branch_coverage=1 00:09:45.321 --rc genhtml_function_coverage=1 00:09:45.321 --rc genhtml_legend=1 00:09:45.321 --rc geninfo_all_blocks=1 00:09:45.321 --rc geninfo_unexecuted_blocks=1 00:09:45.321 00:09:45.321 ' 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:45.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.321 --rc genhtml_branch_coverage=1 00:09:45.321 --rc genhtml_function_coverage=1 00:09:45.321 --rc genhtml_legend=1 00:09:45.321 --rc geninfo_all_blocks=1 00:09:45.321 --rc geninfo_unexecuted_blocks=1 00:09:45.321 00:09:45.321 ' 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:45.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.321 --rc genhtml_branch_coverage=1 00:09:45.321 --rc genhtml_function_coverage=1 00:09:45.321 --rc genhtml_legend=1 00:09:45.321 --rc geninfo_all_blocks=1 00:09:45.321 --rc geninfo_unexecuted_blocks=1 00:09:45.321 00:09:45.321 ' 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.321 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.322 18:30:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.857 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.857 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:47.857 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:47.857 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:47.857 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:47.857 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:47.857 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:47.857 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:47.857 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:47.857 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:47.857 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:47.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:47.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:47.858 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.858 18:30:34 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:47.858 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:47.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:09:47.858 00:09:47.858 --- 10.0.0.2 ping statistics --- 00:09:47.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.858 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:09:47.858 00:09:47.858 --- 10.0.0.1 ping statistics --- 00:09:47.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.858 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:47.858 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:47.859 only one NIC for nvmf test 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:47.859 18:30:34 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.859 rmmod nvme_tcp 00:09:47.859 rmmod nvme_fabrics 00:09:47.859 rmmod nvme_keyring 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.859 18:30:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:49.770 00:09:49.770 real 0m4.686s 00:09:49.770 user 0m0.987s 00:09:49.770 sys 0m1.704s 00:09:49.770 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.771 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:49.771 ************************************ 00:09:49.771 END TEST nvmf_target_multipath 00:09:49.771 ************************************ 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.031 ************************************ 00:09:50.031 START TEST nvmf_zcopy 00:09:50.031 ************************************ 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:50.031 * Looking for test storage... 00:09:50.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.031 18:30:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.031 --rc genhtml_branch_coverage=1 00:09:50.031 --rc genhtml_function_coverage=1 00:09:50.031 --rc genhtml_legend=1 00:09:50.031 --rc geninfo_all_blocks=1 00:09:50.031 --rc geninfo_unexecuted_blocks=1 00:09:50.031 00:09:50.031 ' 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.031 --rc genhtml_branch_coverage=1 00:09:50.031 --rc genhtml_function_coverage=1 00:09:50.031 --rc genhtml_legend=1 00:09:50.031 --rc geninfo_all_blocks=1 00:09:50.031 --rc geninfo_unexecuted_blocks=1 00:09:50.031 00:09:50.031 ' 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:50.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.031 --rc genhtml_branch_coverage=1 00:09:50.031 --rc genhtml_function_coverage=1 00:09:50.031 --rc genhtml_legend=1 00:09:50.031 --rc geninfo_all_blocks=1 00:09:50.031 --rc geninfo_unexecuted_blocks=1 00:09:50.031 00:09:50.031 ' 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.031 --rc genhtml_branch_coverage=1 00:09:50.031 --rc 
genhtml_function_coverage=1 00:09:50.031 --rc genhtml_legend=1 00:09:50.031 --rc geninfo_all_blocks=1 00:09:50.031 --rc geninfo_unexecuted_blocks=1 00:09:50.031 00:09:50.031 ' 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:50.031 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.032 18:30:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:50.032 18:30:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.032 18:30:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:52.570 18:30:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:52.570 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:52.570 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:52.570 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:52.571 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:52.571 18:30:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:52.571 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.571 18:30:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:52.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:09:52.571 00:09:52.571 --- 10.0.0.2 ping statistics --- 00:09:52.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.571 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:09:52.571 00:09:52.571 --- 10.0.0.1 ping statistics --- 00:09:52.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.571 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=638279 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 638279 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 638279 ']' 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.571 18:30:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.571 [2024-11-17 18:30:38.935565] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:52.571 [2024-11-17 18:30:38.935645] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.571 [2024-11-17 18:30:39.025735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.571 [2024-11-17 18:30:39.080754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.571 [2024-11-17 18:30:39.080818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:52.571 [2024-11-17 18:30:39.080845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.571 [2024-11-17 18:30:39.080867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.571 [2024-11-17 18:30:39.080886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.571 [2024-11-17 18:30:39.081669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.837 [2024-11-17 18:30:39.300590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.837 [2024-11-17 18:30:39.316839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.837 malloc0 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:52.837 { 00:09:52.837 "params": { 00:09:52.837 "name": "Nvme$subsystem", 00:09:52.837 "trtype": "$TEST_TRANSPORT", 00:09:52.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.837 "adrfam": "ipv4", 00:09:52.837 "trsvcid": "$NVMF_PORT", 00:09:52.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.837 "hdgst": ${hdgst:-false}, 00:09:52.837 "ddgst": ${ddgst:-false} 00:09:52.837 }, 00:09:52.837 "method": "bdev_nvme_attach_controller" 00:09:52.837 } 00:09:52.837 EOF 00:09:52.837 )") 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:52.837 18:30:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:52.837 "params": { 00:09:52.837 "name": "Nvme1", 00:09:52.837 "trtype": "tcp", 00:09:52.837 "traddr": "10.0.0.2", 00:09:52.837 "adrfam": "ipv4", 00:09:52.837 "trsvcid": "4420", 00:09:52.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:52.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:52.837 "hdgst": false, 00:09:52.837 "ddgst": false 00:09:52.837 }, 00:09:52.837 "method": "bdev_nvme_attach_controller" 00:09:52.837 }' 00:09:52.837 [2024-11-17 18:30:39.401506] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:09:52.837 [2024-11-17 18:30:39.401573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid638389 ] 00:09:53.096 [2024-11-17 18:30:39.471717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.096 [2024-11-17 18:30:39.518914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.354 Running I/O for 10 seconds... 
00:09:55.222 5748.00 IOPS, 44.91 MiB/s [2024-11-17T17:30:43.172Z] 5813.50 IOPS, 45.42 MiB/s [2024-11-17T17:30:44.107Z] 5844.00 IOPS, 45.66 MiB/s [2024-11-17T17:30:45.133Z] 5848.75 IOPS, 45.69 MiB/s [2024-11-17T17:30:46.070Z] 5852.20 IOPS, 45.72 MiB/s [2024-11-17T17:30:47.008Z] 5865.00 IOPS, 45.82 MiB/s [2024-11-17T17:30:47.945Z] 5869.29 IOPS, 45.85 MiB/s [2024-11-17T17:30:48.880Z] 5868.12 IOPS, 45.84 MiB/s [2024-11-17T17:30:49.813Z] 5871.56 IOPS, 45.87 MiB/s [2024-11-17T17:30:49.813Z] 5870.80 IOPS, 45.87 MiB/s 00:10:03.237 Latency(us) 00:10:03.237 [2024-11-17T17:30:49.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.238 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:03.238 Verification LBA range: start 0x0 length 0x1000 00:10:03.238 Nvme1n1 : 10.02 5873.62 45.89 0.00 0.00 21734.00 3543.80 30874.74 00:10:03.238 [2024-11-17T17:30:49.814Z] =================================================================================================================== 00:10:03.238 [2024-11-17T17:30:49.814Z] Total : 5873.62 45.89 0.00 0.00 21734.00 3543.80 30874.74 00:10:03.496 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=639591 00:10:03.496 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:03.496 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.496 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:03.496 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:03.496 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:03.496 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:03.496 18:30:49 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:03.496 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:03.496 { 00:10:03.496 "params": { 00:10:03.496 "name": "Nvme$subsystem", 00:10:03.496 "trtype": "$TEST_TRANSPORT", 00:10:03.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.496 "adrfam": "ipv4", 00:10:03.496 "trsvcid": "$NVMF_PORT", 00:10:03.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.496 "hdgst": ${hdgst:-false}, 00:10:03.496 "ddgst": ${ddgst:-false} 00:10:03.496 }, 00:10:03.496 "method": "bdev_nvme_attach_controller" 00:10:03.496 } 00:10:03.496 EOF 00:10:03.496 )") 00:10:03.496 [2024-11-17 18:30:49.995248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.496 [2024-11-17 18:30:49.995291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.496 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:03.496 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:03.496 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:03.496 18:30:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:03.496 "params": { 00:10:03.496 "name": "Nvme1", 00:10:03.496 "trtype": "tcp", 00:10:03.496 "traddr": "10.0.0.2", 00:10:03.496 "adrfam": "ipv4", 00:10:03.496 "trsvcid": "4420", 00:10:03.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.496 "hdgst": false, 00:10:03.496 "ddgst": false 00:10:03.496 }, 00:10:03.496 "method": "bdev_nvme_attach_controller" 00:10:03.497 }' 00:10:03.497 [2024-11-17 18:30:50.003202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.497 [2024-11-17 18:30:50.003228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.497 [2024-11-17 18:30:50.011216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.497 [2024-11-17 18:30:50.011238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.497 [2024-11-17 18:30:50.019247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.497 [2024-11-17 18:30:50.019274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.497 [2024-11-17 18:30:50.027261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.497 [2024-11-17 18:30:50.027300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.497 [2024-11-17 18:30:50.035281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.497 [2024-11-17 18:30:50.035302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.497 [2024-11-17 18:30:50.041554] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:10:03.497 [2024-11-17 18:30:50.041629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639591 ] 00:10:03.497 [2024-11-17 18:30:50.043320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.497 [2024-11-17 18:30:50.043343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.497 [2024-11-17 18:30:50.051380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.497 [2024-11-17 18:30:50.051425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.497 [2024-11-17 18:30:50.059357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.497 [2024-11-17 18:30:50.059379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.497 [2024-11-17 18:30:50.067381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.497 [2024-11-17 18:30:50.067408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.075422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.075444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.083434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.083470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.091458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.091477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:03.756 [2024-11-17 18:30:50.099464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.099483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.107507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.107528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.114989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.756 [2024-11-17 18:30:50.115506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.115525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.123582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.123617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.131604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.131650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.139580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.139602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.147599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.147620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.155627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.155649] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.163642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.163685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.166666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.756 [2024-11-17 18:30:50.171686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.171708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.179748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.179771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.187791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.187826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.195812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.195846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.203834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.203873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.211857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.211893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.219868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:10:03.756 [2024-11-17 18:30:50.219902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.227894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.227947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.235879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.235910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.243934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.243987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.251973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.252012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.259998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.260035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.267977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.267999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.275985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.276007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.284042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 
18:30:50.284066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.292055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.292079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.300074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.300096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.308092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.308113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.316115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.316136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.756 [2024-11-17 18:30:50.324145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.756 [2024-11-17 18:30:50.324165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.332169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.332190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.340187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.340207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.348214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.348236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.356246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.356271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.364257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.364279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.372279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.372299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.380307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.380329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.388321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.388340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.396341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.396360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.404378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.404402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.412391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.412411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 
[2024-11-17 18:30:50.420413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.420432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.428440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.428467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.436457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.436476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.444481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.444501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.452512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.452549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.460533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.460557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.468551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.468587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 Running I/O for 5 seconds... 
00:10:04.014 [2024-11-17 18:30:50.476596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.476615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.491456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.491485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.502352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.014 [2024-11-17 18:30:50.502380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.014 [2024-11-17 18:30:50.515183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.015 [2024-11-17 18:30:50.515212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.015 [2024-11-17 18:30:50.524792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.015 [2024-11-17 18:30:50.524820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.015 [2024-11-17 18:30:50.535856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.015 [2024-11-17 18:30:50.535884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.015 [2024-11-17 18:30:50.548694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.015 [2024-11-17 18:30:50.548722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.015 [2024-11-17 18:30:50.558731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.015 [2024-11-17 18:30:50.558759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.015 [2024-11-17 18:30:50.569341] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.015 [2024-11-17 18:30:50.569368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.015 [2024-11-17 18:30:50.582092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.015 [2024-11-17 18:30:50.582120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.592343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.592370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.602831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.602859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.613390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.613417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.623735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.623771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.634265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.634292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.644733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.644761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.655167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.655194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.665695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.665722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.676126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.676153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.686449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.686475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.696813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.696840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.707447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.707474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.718023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.718050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.728723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.728750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.739461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 
[2024-11-17 18:30:50.739488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.750191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.750218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.760804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.760831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.771483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.771511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.782414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.782441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.795017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.795044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.805215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.805242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.816043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.816070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.829078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.829116] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.273 [2024-11-17 18:30:50.839170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.273 [2024-11-17 18:30:50.839197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:50.849712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:50.849739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:50.860277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:50.860304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:50.870517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:50.870544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:50.881206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:50.881233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:50.893338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:50.893365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:50.902151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:50.902178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:50.913335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:50.913362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:04.532 [2024-11-17 18:30:50.923708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:50.923734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:50.934426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:50.934453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:50.946996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:50.947022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:50.957053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:50.957095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:50.967795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:50.967823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:50.978425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:50.978452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:50.989380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:50.989407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:51.001859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:51.001887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:51.011848] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:51.011874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:51.022092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:51.022120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:51.032606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:51.032633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:51.043382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:51.043409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:51.053948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:51.053975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:51.064750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:51.064777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:51.075292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:51.075319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:51.085942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:51.085969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.532 [2024-11-17 18:30:51.096741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:04.532 [2024-11-17 18:30:51.096768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.109434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.109461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.119837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.119864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.130370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.130396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.140829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.140855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.151548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.151575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.161888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.161914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.172613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.172640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.186310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 
[2024-11-17 18:30:51.186337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.199257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.199285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.209618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.209646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.220385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.220412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.233263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.233290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.243338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.243366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.253641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.253668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.264208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.264235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.275003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.275031] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.287274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.287301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.296540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.296567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.310378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.310405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.321264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.321292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.331969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.331997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.342635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.342664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.353286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.353313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.791 [2024-11-17 18:30:51.363710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.791 [2024-11-17 18:30:51.363744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:05.050 [2024-11-17 18:30:51.374294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.374323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.385027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.385054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.395600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.395628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.407930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.407958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.417926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.417953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.428969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.428996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.439581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.439608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.450347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.450374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.463262] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.463291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.472863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.472891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 11832.00 IOPS, 92.44 MiB/s [2024-11-17T17:30:51.626Z] [2024-11-17 18:30:51.484505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.484532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.497222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.497249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.506656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.506694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.518183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.518210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.531214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.531242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.541360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.541402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.552066] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.552108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.562550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.562577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.572816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.572843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.582972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.582999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.593401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.593428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.603903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.050 [2024-11-17 18:30:51.603930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.050 [2024-11-17 18:30:51.617482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.051 [2024-11-17 18:30:51.617509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.627864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.627890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.638521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.638549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.651239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.651273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.662967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.662995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.672530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.672556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.683616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.683643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.695859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.695886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.705932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.705958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.716312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.716339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.726925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 
[2024-11-17 18:30:51.726952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.737529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.737557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.750771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.750799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.762417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.762444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.771771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.771798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.783172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.783199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.795480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.795507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.805277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.805305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.815942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.815969] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.826955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.826982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.839709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.839736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.851561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.851604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.860525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.860558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.872134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.872162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.309 [2024-11-17 18:30:51.882469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.309 [2024-11-17 18:30:51.882496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:51.893213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:51.893241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:51.905559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:51.905586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:05.568 [2024-11-17 18:30:51.915627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:51.915655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:51.926105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:51.926132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:51.936435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:51.936462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:51.946573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:51.946600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:51.957149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:51.957177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:51.967872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:51.967900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:51.978400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:51.978427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:51.991411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:51.991438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:52.001410] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:52.001437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:52.012017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:52.012044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:52.022835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:52.022863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:52.033249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:52.033275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:52.044005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:52.044032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:52.054188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:52.054215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:52.064488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:52.064523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:52.074807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:52.074835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:52.085205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:52.085233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:52.095705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:52.095731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:52.108320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:52.108347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:52.118367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:52.118393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:52.129144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:52.129171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.568 [2024-11-17 18:30:52.142633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.568 [2024-11-17 18:30:52.142660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.153186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.153213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.163949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.163976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.174746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 
[2024-11-17 18:30:52.174773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.185235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.185263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.195558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.195585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.205962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.205989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.216865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.216892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.227391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.227417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.237951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.237978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.248501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.248528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.261160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.261187] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.271177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.271211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.281612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.281639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.292178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.292220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.302975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.303002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.313772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.313799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.326497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.326525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.338161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.338188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.347301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.347329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:05.827 [2024-11-17 18:30:52.358363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.358391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.371107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.371135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.381465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.381492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.827 [2024-11-17 18:30:52.391893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.827 [2024-11-17 18:30:52.391920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.402800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.402827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.415512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.415540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.426028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.426055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.436742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.436770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.449313] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.449356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.459331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.459358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.470074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.470102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 11930.50 IOPS, 93.21 MiB/s [2024-11-17T17:30:52.662Z] [2024-11-17 18:30:52.481086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.481114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.492037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.492065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.504507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.504534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.514512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.514540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.525019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.525047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.535704] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.535733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.546506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.546534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.558991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.559018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.568924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.568952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.579738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.579765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.590316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.590344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.601479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.601506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.614181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.614220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.624228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.624270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.635158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.635185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.645848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.645875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.086 [2024-11-17 18:30:52.656578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.086 [2024-11-17 18:30:52.656604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.344 [2024-11-17 18:30:52.669062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.344 [2024-11-17 18:30:52.669089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.344 [2024-11-17 18:30:52.679217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.344 [2024-11-17 18:30:52.679262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.344 [2024-11-17 18:30:52.689752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.344 [2024-11-17 18:30:52.689779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.344 [2024-11-17 18:30:52.700354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.344 [2024-11-17 18:30:52.700381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.344 [2024-11-17 18:30:52.713226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.344 
[2024-11-17 18:30:52.713253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.344 [2024-11-17 18:30:52.723208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.344 [2024-11-17 18:30:52.723235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.344 [2024-11-17 18:30:52.733539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.344 [2024-11-17 18:30:52.733565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.344 [2024-11-17 18:30:52.743768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.344 [2024-11-17 18:30:52.743795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.754628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.754655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.766902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.766930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.775801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.775829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.787052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.787079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.797648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.797683] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.808223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.808251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.819024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.819050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.830108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.830135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.843006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.843033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.853368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.853395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.863957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.863985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.874470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.874497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.885047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.885082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:06.345 [2024-11-17 18:30:52.895992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.896019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.906534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.906561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.345 [2024-11-17 18:30:52.916913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.345 [2024-11-17 18:30:52.916940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.603 [2024-11-17 18:30:52.927573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.603 [2024-11-17 18:30:52.927600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.603 [2024-11-17 18:30:52.939912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.603 [2024-11-17 18:30:52.939940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.603 [2024-11-17 18:30:52.949913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.603 [2024-11-17 18:30:52.949940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.603 [2024-11-17 18:30:52.960485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.603 [2024-11-17 18:30:52.960512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.603 [2024-11-17 18:30:52.971353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.603 [2024-11-17 18:30:52.971381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.603 [2024-11-17 18:30:52.984211] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.603 [2024-11-17 18:30:52.984238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.603 [2024-11-17 18:30:52.994587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:52.994613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.005369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.005396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.018882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.018910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.029241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.029268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.040193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.040221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.052732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.052759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.063131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.063158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.073413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.073441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.084112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.084139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.094794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.094828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.105304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.105331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.116076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.116103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.126587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.126614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.136794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.136821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.147296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.147323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.157787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 
[2024-11-17 18:30:53.157814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.168194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.168220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.604 [2024-11-17 18:30:53.178470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.604 [2024-11-17 18:30:53.178498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.189029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.189056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.199488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.199515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.210026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.210053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.220607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.220635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.232924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.232951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.242009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.242035] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.255269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.255296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.265693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.265720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.276213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.276240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.288159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.288186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.297789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.297824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.308665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.308702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.321203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.321230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.331171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.331198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:06.862 [2024-11-17 18:30:53.341600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.341628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.352210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.352237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.362995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.363022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.373785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.373813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.384488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.384515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.397015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.397041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.407097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.862 [2024-11-17 18:30:53.407124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.862 [2024-11-17 18:30:53.417923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.863 [2024-11-17 18:30:53.417950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.863 [2024-11-17 18:30:53.430320] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.863 [2024-11-17 18:30:53.430347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.120 [2024-11-17 18:30:53.440228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.120 [2024-11-17 18:30:53.440255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.450967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.450994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.463042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.463069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.472482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.472509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 11941.00 IOPS, 93.29 MiB/s [2024-11-17T17:30:53.697Z] [2024-11-17 18:30:53.483167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.483194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.493790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.493816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.504170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.504212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.514612] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.514639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.525306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.525333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.538059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.538086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.548028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.548055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.558357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.558384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.568754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.568781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.579046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.579073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.589807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.589834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.599793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.599821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.610205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.610231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.620809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.620835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.631354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.631382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.641841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.641869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.652527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.652555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.662949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.662976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.673436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 [2024-11-17 18:30:53.673464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.121 [2024-11-17 18:30:53.684217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.121 
[2024-11-17 18:30:53.684244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.697106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.697133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.707162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.707191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.717424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.717452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.728215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.728243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.738896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.738923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.749393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.749421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.760185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.760212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.770864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.770891] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.784046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.784073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.794320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.794347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.804749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.804776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.815398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.815425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.826347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.826374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.838863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.838890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.848889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.848916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.859461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.859488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:07.379 [2024-11-17 18:30:53.870020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.870047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.880458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.880485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.890986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.891013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.901715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.901741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.912001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.912029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.923195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.923222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.936086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.936114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.379 [2024-11-17 18:30:53.946493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.379 [2024-11-17 18:30:53.946520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:53.957180] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:53.957207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:53.967766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:53.967793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:53.978572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:53.978599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:53.991238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:53.991265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.001754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.001780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.012451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.012478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.025141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.025184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.035471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.035499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.046083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.046110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.056897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.056924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.068225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.068252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.078944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.078972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.089901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.089928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.100774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.100801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.111468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.111495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.122421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.122448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.133113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 
[2024-11-17 18:30:54.133140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.145608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.145635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.155590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.155616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.166432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.637 [2024-11-17 18:30:54.166459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.637 [2024-11-17 18:30:54.177361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.638 [2024-11-17 18:30:54.177388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.638 [2024-11-17 18:30:54.189949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.638 [2024-11-17 18:30:54.189976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.638 [2024-11-17 18:30:54.199519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.638 [2024-11-17 18:30:54.199546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.638 [2024-11-17 18:30:54.210281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.638 [2024-11-17 18:30:54.210309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.221354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.221382] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.233833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.233860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.243941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.243968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.254416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.254443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.264918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.264945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.275735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.275763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.286454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.286481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.299198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.299225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.309308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.309336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:07.896 [2024-11-17 18:30:54.320286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.320320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.333070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.333097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.343198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.343225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.353901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.353928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.366319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.366346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.376201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.376228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.387074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.387102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.397303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.397331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.408277] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.408303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.421046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.421073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.430498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.430525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.441866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.441893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.452785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.452812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.896 [2024-11-17 18:30:54.463644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.896 [2024-11-17 18:30:54.463671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.154 [2024-11-17 18:30:54.476864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.154 [2024-11-17 18:30:54.476891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.154 11932.75 IOPS, 93.22 MiB/s [2024-11-17T17:30:54.730Z] [2024-11-17 18:30:54.487237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.154 [2024-11-17 18:30:54.487264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.497801] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.497828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.510064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.510091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.519969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.519996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.530553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.530588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.541394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.541421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.552174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.552201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.562647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.562681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.573153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.573180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.583774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.583801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.594518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.594545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.605025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.605052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.615719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.615746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.626317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.626344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.637209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.637237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.647859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.647886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.661372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.661400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.671426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 
[2024-11-17 18:30:54.671455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.682255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.682283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.694738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.694766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.705089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.705115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.715572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.715599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.155 [2024-11-17 18:30:54.725924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.155 [2024-11-17 18:30:54.725951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.736237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.736271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.746559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.746587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.757470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.757498] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.770067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.770094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.781855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.781883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.790474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.790503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.801747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.801774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.814071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.814098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.823983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.824011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.834356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.834383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.845254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.845281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:08.413 [2024-11-17 18:30:54.857658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.857704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.868120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.868148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.878758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.878785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.891235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.891263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.901479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.901506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.912390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.912417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.925254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.925282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.936974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.937001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.946262] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.946289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.956624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.956651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.968836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.968864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.413 [2024-11-17 18:30:54.980359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.413 [2024-11-17 18:30:54.980386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:54.989179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:54.989206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.000566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.000609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.012884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.012911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.022702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.022730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.032822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.032849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.043579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.043606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.054177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.054204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.064898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.064925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.077534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.077576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.087471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.087499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.097940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.097967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.108530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.108557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.119152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 
[2024-11-17 18:30:55.119195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.129566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.129593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.140000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.140027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.150645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.150672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.161533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.161560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.177856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.177895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.188003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.671 [2024-11-17 18:30:55.188030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.671 [2024-11-17 18:30:55.198864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.672 [2024-11-17 18:30:55.198891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.672 [2024-11-17 18:30:55.211168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.672 [2024-11-17 18:30:55.211194] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.672 [2024-11-17 18:30:55.221099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.672 [2024-11-17 18:30:55.221142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.672 [2024-11-17 18:30:55.231669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.672 [2024-11-17 18:30:55.231703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.672 [2024-11-17 18:30:55.242262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.672 [2024-11-17 18:30:55.242304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.252948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.252975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.263423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.263449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.274044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.274086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.284601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.284629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.295076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.295103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:08.930 [2024-11-17 18:30:55.305774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.305802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.318661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.318697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.329101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.329128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.339411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.339438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.350553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.350580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.363350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.363377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.373804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.373831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.384781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.384808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.397430] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.397459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.407751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.407777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.418391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.418418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.429132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.429158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.439542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.439569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.450386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.450413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.460966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.460992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.471108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.471135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 [2024-11-17 18:30:55.482061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.482088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 11937.80 IOPS, 93.26 MiB/s [2024-11-17T17:30:55.506Z] [2024-11-17 18:30:55.491983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.492009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.930 00:10:08.930 Latency(us) 00:10:08.930 [2024-11-17T17:30:55.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.930 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:08.930 Nvme1n1 : 5.01 11944.13 93.31 0.00 0.00 10703.25 4538.97 25049.32 00:10:08.930 [2024-11-17T17:30:55.506Z] =================================================================================================================== 00:10:08.930 [2024-11-17T17:30:55.506Z] Total : 11944.13 93.31 0.00 0.00 10703.25 4538.97 25049.32 00:10:08.930 [2024-11-17 18:30:55.497949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.930 [2024-11-17 18:30:55.497973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.189 [2024-11-17 18:30:55.505992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.189 [2024-11-17 18:30:55.506019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.514036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.514085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.522084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.522134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:09.190 [2024-11-17 18:30:55.530105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.530154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.538116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.538164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.546145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.546194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.554170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.554220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.562192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.562240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.570202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.570250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.578234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.578283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.586257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.586308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.594280] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.594330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.602308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.602360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.610319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.610368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.618332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.618380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.626361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.626409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.634383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.634423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.642338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.642361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.650383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.650416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.658451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.658500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.666465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.666525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.674420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.674440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.682436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.682455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 [2024-11-17 18:30:55.690457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.190 [2024-11-17 18:30:55.690476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (639591) - No such process 00:10:09.190 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 639591 00:10:09.190 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.190 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.190 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:09.190 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.190 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:10:09.190 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.190 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:09.190 delay0 00:10:09.190 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.190 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:09.190 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.190 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:09.190 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.190 18:30:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:09.448 [2024-11-17 18:30:55.771298] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:16.011 Initializing NVMe Controllers 00:10:16.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:16.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:16.011 Initialization complete. Launching workers. 
00:10:16.011 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 84 00:10:16.011 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 371, failed to submit 33 00:10:16.011 success 190, unsuccessful 181, failed 0 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.011 rmmod nvme_tcp 00:10:16.011 rmmod nvme_fabrics 00:10:16.011 rmmod nvme_keyring 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 638279 ']' 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 638279 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 638279 ']' 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 638279 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 638279 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 638279' 00:10:16.011 killing process with pid 638279 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 638279 00:10:16.011 18:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 638279 00:10:16.011 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:16.011 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:16.011 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:16.011 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:16.011 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:16.011 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:16.011 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:16.011 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.011 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:16.011 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:16.011 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.011 18:31:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.930 00:10:17.930 real 0m27.846s 00:10:17.930 user 0m41.184s 00:10:17.930 sys 0m8.155s 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.930 ************************************ 00:10:17.930 END TEST nvmf_zcopy 00:10:17.930 ************************************ 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.930 ************************************ 00:10:17.930 START TEST nvmf_nmic 00:10:17.930 ************************************ 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:17.930 * Looking for test storage... 
00:10:17.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.930 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.931 18:31:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.931 --rc genhtml_branch_coverage=1 00:10:17.931 --rc genhtml_function_coverage=1 00:10:17.931 --rc genhtml_legend=1 00:10:17.931 --rc geninfo_all_blocks=1 00:10:17.931 --rc geninfo_unexecuted_blocks=1 
00:10:17.931 00:10:17.931 ' 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.931 --rc genhtml_branch_coverage=1 00:10:17.931 --rc genhtml_function_coverage=1 00:10:17.931 --rc genhtml_legend=1 00:10:17.931 --rc geninfo_all_blocks=1 00:10:17.931 --rc geninfo_unexecuted_blocks=1 00:10:17.931 00:10:17.931 ' 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.931 --rc genhtml_branch_coverage=1 00:10:17.931 --rc genhtml_function_coverage=1 00:10:17.931 --rc genhtml_legend=1 00:10:17.931 --rc geninfo_all_blocks=1 00:10:17.931 --rc geninfo_unexecuted_blocks=1 00:10:17.931 00:10:17.931 ' 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.931 --rc genhtml_branch_coverage=1 00:10:17.931 --rc genhtml_function_coverage=1 00:10:17.931 --rc genhtml_legend=1 00:10:17.931 --rc geninfo_all_blocks=1 00:10:17.931 --rc geninfo_unexecuted_blocks=1 00:10:17.931 00:10:17.931 ' 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.931 18:31:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:17.931 
18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.931 18:31:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.465 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.465 18:31:06 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:20.466 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:20.466 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:20.466 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:20.466 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:20.466 
18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:20.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:10:20.466 00:10:20.466 --- 10.0.0.2 ping statistics --- 00:10:20.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.466 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:10:20.466 00:10:20.466 --- 10.0.0.1 ping statistics --- 00:10:20.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.466 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=642990 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:20.466 
18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 642990 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 642990 ']' 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.466 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.467 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.467 18:31:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.467 [2024-11-17 18:31:06.859378] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:10:20.467 [2024-11-17 18:31:06.859475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.467 [2024-11-17 18:31:06.930209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.467 [2024-11-17 18:31:06.976031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.467 [2024-11-17 18:31:06.976086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.467 [2024-11-17 18:31:06.976114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.467 [2024-11-17 18:31:06.976125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:20.467 [2024-11-17 18:31:06.976135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.467 [2024-11-17 18:31:06.977728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.467 [2024-11-17 18:31:06.977758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.467 [2024-11-17 18:31:06.977785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.467 [2024-11-17 18:31:06.977788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.725 [2024-11-17 18:31:07.124643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:20.725 18:31:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.725 Malloc0 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.725 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.726 [2024-11-17 18:31:07.196033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:20.726 test case1: single bdev can't be used in multiple subsystems 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.726 [2024-11-17 18:31:07.219848] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:20.726 [2024-11-17 18:31:07.219879] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:20.726 [2024-11-17 18:31:07.219895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:10:20.726 request: 00:10:20.726 { 00:10:20.726 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:20.726 "namespace": { 00:10:20.726 "bdev_name": "Malloc0", 00:10:20.726 "no_auto_visible": false 00:10:20.726 }, 00:10:20.726 "method": "nvmf_subsystem_add_ns", 00:10:20.726 "req_id": 1 00:10:20.726 } 00:10:20.726 Got JSON-RPC error response 00:10:20.726 response: 00:10:20.726 { 00:10:20.726 "code": -32602, 00:10:20.726 "message": "Invalid parameters" 00:10:20.726 } 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:20.726 Adding namespace failed - expected result. 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:20.726 test case2: host connect to nvmf target in multiple paths 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.726 [2024-11-17 18:31:07.227991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.726 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:21.659 18:31:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:22.225 18:31:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:22.225 18:31:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:22.225 18:31:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.225 18:31:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:22.225 18:31:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:24.137 18:31:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:24.137 18:31:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:24.137 18:31:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.137 18:31:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:24.137 18:31:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.137 18:31:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:24.137 18:31:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:24.137 [global] 00:10:24.137 thread=1 
00:10:24.137 invalidate=1 00:10:24.137 rw=write 00:10:24.137 time_based=1 00:10:24.137 runtime=1 00:10:24.137 ioengine=libaio 00:10:24.137 direct=1 00:10:24.137 bs=4096 00:10:24.137 iodepth=1 00:10:24.137 norandommap=0 00:10:24.137 numjobs=1 00:10:24.137 00:10:24.137 verify_dump=1 00:10:24.137 verify_backlog=512 00:10:24.137 verify_state_save=0 00:10:24.137 do_verify=1 00:10:24.137 verify=crc32c-intel 00:10:24.137 [job0] 00:10:24.137 filename=/dev/nvme0n1 00:10:24.137 Could not set queue depth (nvme0n1) 00:10:24.395 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.395 fio-3.35 00:10:24.395 Starting 1 thread 00:10:25.768 00:10:25.768 job0: (groupid=0, jobs=1): err= 0: pid=643522: Sun Nov 17 18:31:11 2024 00:10:25.768 read: IOPS=23, BW=92.9KiB/s (95.2kB/s)(96.0KiB/1033msec) 00:10:25.768 slat (nsec): min=8518, max=35089, avg=26804.42, stdev=9608.93 00:10:25.768 clat (usec): min=347, max=41155, avg=39261.01, stdev=8288.96 00:10:25.768 lat (usec): min=381, max=41164, avg=39287.82, stdev=8287.44 00:10:25.768 clat percentiles (usec): 00:10:25.768 | 1.00th=[ 347], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:25.768 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:25.768 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:25.768 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:25.768 | 99.99th=[41157] 00:10:25.768 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:10:25.768 slat (nsec): min=7347, max=44225, avg=12295.36, stdev=5815.05 00:10:25.768 clat (usec): min=124, max=242, avg=158.52, stdev=17.89 00:10:25.768 lat (usec): min=132, max=273, avg=170.81, stdev=21.01 00:10:25.768 clat percentiles (usec): 00:10:25.768 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 143], 00:10:25.768 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 159], 00:10:25.768 | 70.00th=[ 167], 80.00th=[ 
176], 90.00th=[ 186], 95.00th=[ 192], 00:10:25.768 | 99.00th=[ 210], 99.50th=[ 212], 99.90th=[ 243], 99.95th=[ 243], 00:10:25.768 | 99.99th=[ 243] 00:10:25.768 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:25.768 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:25.768 lat (usec) : 250=95.52%, 500=0.19% 00:10:25.768 lat (msec) : 50=4.29% 00:10:25.768 cpu : usr=0.58%, sys=0.68%, ctx=536, majf=0, minf=1 00:10:25.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.768 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.768 00:10:25.768 Run status group 0 (all jobs): 00:10:25.768 READ: bw=92.9KiB/s (95.2kB/s), 92.9KiB/s-92.9KiB/s (95.2kB/s-95.2kB/s), io=96.0KiB (98.3kB), run=1033-1033msec 00:10:25.768 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:10:25.768 00:10:25.768 Disk stats (read/write): 00:10:25.768 nvme0n1: ios=70/512, merge=0/0, ticks=1011/77, in_queue=1088, util=95.49% 00:10:25.768 18:31:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:25.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:25.768 rmmod nvme_tcp 00:10:25.768 rmmod nvme_fabrics 00:10:25.768 rmmod nvme_keyring 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 642990 ']' 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 642990 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 642990 ']' 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 
642990 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 642990 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 642990' 00:10:25.768 killing process with pid 642990 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 642990 00:10:25.768 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 642990 00:10:26.027 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:26.027 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:26.027 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:26.027 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:26.027 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:26.027 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:26.027 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:26.027 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:26.027 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:26.027 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.027 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.027 18:31:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.936 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:27.936 00:10:27.936 real 0m10.143s 00:10:27.936 user 0m22.628s 00:10:27.936 sys 0m2.534s 00:10:27.936 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.936 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.936 ************************************ 00:10:27.936 END TEST nvmf_nmic 00:10:27.936 ************************************ 00:10:27.936 18:31:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:27.936 18:31:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:27.936 18:31:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.936 18:31:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:27.936 ************************************ 00:10:27.936 START TEST nvmf_fio_target 00:10:27.936 ************************************ 00:10:27.936 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:28.196 * Looking for test storage... 
00:10:28.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:28.196 18:31:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:28.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.196 
--rc genhtml_branch_coverage=1 00:10:28.196 --rc genhtml_function_coverage=1 00:10:28.196 --rc genhtml_legend=1 00:10:28.196 --rc geninfo_all_blocks=1 00:10:28.196 --rc geninfo_unexecuted_blocks=1 00:10:28.196 00:10:28.196 ' 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:28.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.196 --rc genhtml_branch_coverage=1 00:10:28.196 --rc genhtml_function_coverage=1 00:10:28.196 --rc genhtml_legend=1 00:10:28.196 --rc geninfo_all_blocks=1 00:10:28.196 --rc geninfo_unexecuted_blocks=1 00:10:28.196 00:10:28.196 ' 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:28.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.196 --rc genhtml_branch_coverage=1 00:10:28.196 --rc genhtml_function_coverage=1 00:10:28.196 --rc genhtml_legend=1 00:10:28.196 --rc geninfo_all_blocks=1 00:10:28.196 --rc geninfo_unexecuted_blocks=1 00:10:28.196 00:10:28.196 ' 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:28.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.196 --rc genhtml_branch_coverage=1 00:10:28.196 --rc genhtml_function_coverage=1 00:10:28.196 --rc genhtml_legend=1 00:10:28.196 --rc geninfo_all_blocks=1 00:10:28.196 --rc geninfo_unexecuted_blocks=1 00:10:28.196 00:10:28.196 ' 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.196 
18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.196 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.197 18:31:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.197 18:31:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:28.197 18:31:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:30.730 18:31:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- 
# [[ tcp == rdma ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:30.730 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:30.730 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.730 
18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:30.730 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.730 18:31:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:30.730 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 
)) 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:30.730 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:30.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:30.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:10:30.731 00:10:30.731 --- 10.0.0.2 ping statistics --- 00:10:30.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.731 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:30.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:10:30.731 00:10:30.731 --- 10.0.0.1 ping statistics --- 00:10:30.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.731 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=645713 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 645713 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 645713 ']' 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.731 18:31:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.731 [2024-11-17 18:31:16.930124] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:10:30.731 [2024-11-17 18:31:16.930217] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.731 [2024-11-17 18:31:17.012532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.731 [2024-11-17 18:31:17.061821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.731 [2024-11-17 18:31:17.061874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.731 [2024-11-17 18:31:17.061888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.731 [2024-11-17 18:31:17.061899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.731 [2024-11-17 18:31:17.061909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:30.731 [2024-11-17 18:31:17.063500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.731 [2024-11-17 18:31:17.063581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.731 [2024-11-17 18:31:17.063584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.731 [2024-11-17 18:31:17.063523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.731 18:31:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.731 18:31:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:30.731 18:31:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:30.731 18:31:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.731 18:31:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.731 18:31:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.731 18:31:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:30.989 [2024-11-17 18:31:17.514724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.989 18:31:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.555 18:31:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:31.555 18:31:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:31.813 18:31:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:31.813 18:31:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:32.071 18:31:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:32.071 18:31:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:32.329 18:31:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:32.329 18:31:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:32.586 18:31:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:32.843 18:31:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:32.843 18:31:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.100 18:31:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:33.100 18:31:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.358 18:31:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:33.358 18:31:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:33.616 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:33.874 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:33.874 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:34.162 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:34.162 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:34.444 18:31:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.701 [2024-11-17 18:31:21.204604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.701 18:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:34.959 18:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:35.216 18:31:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:36.150 18:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:36.150 18:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:36.150 18:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:36.150 18:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:36.150 18:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:36.150 18:31:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:38.047 18:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:38.047 18:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:38.047 18:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:38.047 18:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:38.048 18:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:38.048 18:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:38.048 18:31:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:38.048 [global] 00:10:38.048 thread=1 00:10:38.048 invalidate=1 00:10:38.048 rw=write 00:10:38.048 time_based=1 00:10:38.048 runtime=1 00:10:38.048 ioengine=libaio 00:10:38.048 direct=1 00:10:38.048 bs=4096 00:10:38.048 iodepth=1 00:10:38.048 norandommap=0 00:10:38.048 numjobs=1 00:10:38.048 00:10:38.048 
verify_dump=1 00:10:38.048 verify_backlog=512 00:10:38.048 verify_state_save=0 00:10:38.048 do_verify=1 00:10:38.048 verify=crc32c-intel 00:10:38.048 [job0] 00:10:38.048 filename=/dev/nvme0n1 00:10:38.048 [job1] 00:10:38.048 filename=/dev/nvme0n2 00:10:38.048 [job2] 00:10:38.048 filename=/dev/nvme0n3 00:10:38.048 [job3] 00:10:38.048 filename=/dev/nvme0n4 00:10:38.048 Could not set queue depth (nvme0n1) 00:10:38.048 Could not set queue depth (nvme0n2) 00:10:38.048 Could not set queue depth (nvme0n3) 00:10:38.048 Could not set queue depth (nvme0n4) 00:10:38.306 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.306 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.306 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.306 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.306 fio-3.35 00:10:38.306 Starting 4 threads 00:10:39.678 00:10:39.678 job0: (groupid=0, jobs=1): err= 0: pid=646790: Sun Nov 17 18:31:25 2024 00:10:39.678 read: IOPS=1026, BW=4107KiB/s (4205kB/s)(4152KiB/1011msec) 00:10:39.678 slat (nsec): min=4451, max=62009, avg=20199.45, stdev=11210.12 00:10:39.678 clat (usec): min=198, max=41993, avg=616.93, stdev=3301.16 00:10:39.678 lat (usec): min=211, max=42008, avg=637.13, stdev=3300.82 00:10:39.678 clat percentiles (usec): 00:10:39.678 | 1.00th=[ 219], 5.00th=[ 233], 10.00th=[ 249], 20.00th=[ 289], 00:10:39.678 | 30.00th=[ 322], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 371], 00:10:39.678 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 412], 95.00th=[ 453], 00:10:39.678 | 99.00th=[ 545], 99.50th=[40633], 99.90th=[41157], 99.95th=[42206], 00:10:39.678 | 99.99th=[42206] 00:10:39.678 write: IOPS=1519, BW=6077KiB/s (6223kB/s)(6144KiB/1011msec); 0 zone resets 00:10:39.678 slat (nsec): min=7021, max=64181, avg=15567.71, 
stdev=7617.69 00:10:39.678 clat (usec): min=127, max=2275, avg=204.27, stdev=81.51 00:10:39.678 lat (usec): min=137, max=2287, avg=219.83, stdev=82.20 00:10:39.678 clat percentiles (usec): 00:10:39.678 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 159], 00:10:39.678 | 30.00th=[ 169], 40.00th=[ 178], 50.00th=[ 188], 60.00th=[ 202], 00:10:39.678 | 70.00th=[ 221], 80.00th=[ 233], 90.00th=[ 269], 95.00th=[ 302], 00:10:39.678 | 99.00th=[ 383], 99.50th=[ 404], 99.90th=[ 1237], 99.95th=[ 2278], 00:10:39.678 | 99.99th=[ 2278] 00:10:39.678 bw ( KiB/s): min= 4096, max= 8175, per=30.82%, avg=6135.50, stdev=2884.29, samples=2 00:10:39.678 iops : min= 1024, max= 2043, avg=1533.50, stdev=720.54, samples=2 00:10:39.678 lat (usec) : 250=54.43%, 500=44.44%, 750=0.74%, 1000=0.04% 00:10:39.678 lat (msec) : 2=0.04%, 4=0.04%, 50=0.27% 00:10:39.678 cpu : usr=2.67%, sys=4.16%, ctx=2575, majf=0, minf=1 00:10:39.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.678 issued rwts: total=1038,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.678 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.678 job1: (groupid=0, jobs=1): err= 0: pid=646791: Sun Nov 17 18:31:25 2024 00:10:39.678 read: IOPS=769, BW=3077KiB/s (3151kB/s)(3080KiB/1001msec) 00:10:39.678 slat (nsec): min=4886, max=65241, avg=16158.56, stdev=9596.92 00:10:39.678 clat (usec): min=188, max=41001, avg=990.91, stdev=5061.94 00:10:39.678 lat (usec): min=196, max=41016, avg=1007.07, stdev=5062.44 00:10:39.678 clat percentiles (usec): 00:10:39.678 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 258], 00:10:39.678 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 343], 00:10:39.678 | 70.00th=[ 363], 80.00th=[ 400], 90.00th=[ 457], 95.00th=[ 490], 00:10:39.678 | 99.00th=[41157], 
99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:39.678 | 99.99th=[41157] 00:10:39.678 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:39.678 slat (nsec): min=6160, max=63879, avg=12963.38, stdev=6717.21 00:10:39.678 clat (usec): min=126, max=407, avg=199.55, stdev=46.33 00:10:39.678 lat (usec): min=133, max=421, avg=212.52, stdev=47.87 00:10:39.678 clat percentiles (usec): 00:10:39.678 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 149], 00:10:39.678 | 30.00th=[ 163], 40.00th=[ 184], 50.00th=[ 208], 60.00th=[ 219], 00:10:39.678 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 260], 95.00th=[ 273], 00:10:39.678 | 99.00th=[ 318], 99.50th=[ 322], 99.90th=[ 355], 99.95th=[ 408], 00:10:39.678 | 99.99th=[ 408] 00:10:39.678 bw ( KiB/s): min= 4087, max= 4087, per=20.53%, avg=4087.00, stdev= 0.00, samples=1 00:10:39.678 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:39.678 lat (usec) : 250=58.14%, 500=39.91%, 750=1.11% 00:10:39.678 lat (msec) : 2=0.06%, 10=0.06%, 20=0.06%, 50=0.67% 00:10:39.678 cpu : usr=1.40%, sys=2.70%, ctx=1794, majf=0, minf=1 00:10:39.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.678 issued rwts: total=770,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.678 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.678 job2: (groupid=0, jobs=1): err= 0: pid=646793: Sun Nov 17 18:31:25 2024 00:10:39.678 read: IOPS=1560, BW=6242KiB/s (6392kB/s)(6248KiB/1001msec) 00:10:39.678 slat (nsec): min=5043, max=28579, avg=11162.66, stdev=4770.18 00:10:39.678 clat (usec): min=190, max=677, avg=342.07, stdev=67.14 00:10:39.678 lat (usec): min=196, max=690, avg=353.23, stdev=68.64 00:10:39.678 clat percentiles (usec): 00:10:39.678 | 1.00th=[ 200], 5.00th=[ 219], 10.00th=[ 255], 
20.00th=[ 306], 00:10:39.678 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:10:39.678 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 420], 95.00th=[ 453], 00:10:39.678 | 99.00th=[ 578], 99.50th=[ 611], 99.90th=[ 676], 99.95th=[ 676], 00:10:39.678 | 99.99th=[ 676] 00:10:39.678 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:39.678 slat (nsec): min=7036, max=64059, avg=10001.46, stdev=3409.25 00:10:39.678 clat (usec): min=139, max=1128, avg=204.01, stdev=57.35 00:10:39.678 lat (usec): min=147, max=1142, avg=214.01, stdev=58.19 00:10:39.678 clat percentiles (usec): 00:10:39.678 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 165], 00:10:39.678 | 30.00th=[ 172], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 210], 00:10:39.678 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 251], 95.00th=[ 310], 00:10:39.678 | 99.00th=[ 383], 99.50th=[ 408], 99.90th=[ 832], 99.95th=[ 955], 00:10:39.678 | 99.99th=[ 1123] 00:10:39.678 bw ( KiB/s): min= 8175, max= 8175, per=41.07%, avg=8175.00, stdev= 0.00, samples=1 00:10:39.678 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:39.678 lat (usec) : 250=55.10%, 500=43.80%, 750=1.00%, 1000=0.08% 00:10:39.678 lat (msec) : 2=0.03% 00:10:39.678 cpu : usr=1.50%, sys=4.40%, ctx=3611, majf=0, minf=1 00:10:39.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.678 issued rwts: total=1562,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.679 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.679 job3: (groupid=0, jobs=1): err= 0: pid=646794: Sun Nov 17 18:31:25 2024 00:10:39.679 read: IOPS=21, BW=85.5KiB/s (87.6kB/s)(88.0KiB/1029msec) 00:10:39.679 slat (nsec): min=14095, max=36155, avg=23147.32, stdev=8963.80 00:10:39.679 clat (usec): min=40718, max=41057, 
avg=40958.80, stdev=66.92 00:10:39.679 lat (usec): min=40734, max=41074, avg=40981.94, stdev=66.37 00:10:39.679 clat percentiles (usec): 00:10:39.679 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:39.679 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:39.679 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:39.679 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:39.679 | 99.99th=[41157] 00:10:39.679 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:10:39.679 slat (nsec): min=7786, max=55985, avg=17895.63, stdev=7903.67 00:10:39.679 clat (usec): min=153, max=463, avg=226.51, stdev=36.67 00:10:39.679 lat (usec): min=170, max=480, avg=244.41, stdev=36.44 00:10:39.679 clat percentiles (usec): 00:10:39.679 | 1.00th=[ 163], 5.00th=[ 178], 10.00th=[ 190], 20.00th=[ 204], 00:10:39.679 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:10:39.679 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 269], 95.00th=[ 297], 00:10:39.679 | 99.00th=[ 363], 99.50th=[ 400], 99.90th=[ 465], 99.95th=[ 465], 00:10:39.679 | 99.99th=[ 465] 00:10:39.679 bw ( KiB/s): min= 4096, max= 4096, per=20.58%, avg=4096.00, stdev= 0.00, samples=1 00:10:39.679 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:39.679 lat (usec) : 250=81.09%, 500=14.79% 00:10:39.679 lat (msec) : 50=4.12% 00:10:39.679 cpu : usr=0.39%, sys=0.88%, ctx=536, majf=0, minf=1 00:10:39.679 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.679 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.679 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.679 00:10:39.679 Run status group 0 (all jobs): 00:10:39.679 READ: bw=12.9MiB/s 
(13.5MB/s), 85.5KiB/s-6242KiB/s (87.6kB/s-6392kB/s), io=13.2MiB (13.9MB), run=1001-1029msec 00:10:39.679 WRITE: bw=19.4MiB/s (20.4MB/s), 1990KiB/s-8184KiB/s (2038kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1029msec 00:10:39.679 00:10:39.679 Disk stats (read/write): 00:10:39.679 nvme0n1: ios=1050/1536, merge=0/0, ticks=1314/311, in_queue=1625, util=85.27% 00:10:39.679 nvme0n2: ios=562/585, merge=0/0, ticks=751/131, in_queue=882, util=90.34% 00:10:39.679 nvme0n3: ios=1414/1536, merge=0/0, ticks=1206/330, in_queue=1536, util=93.40% 00:10:39.679 nvme0n4: ios=74/512, merge=0/0, ticks=894/113, in_queue=1007, util=94.19% 00:10:39.679 18:31:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:39.679 [global] 00:10:39.679 thread=1 00:10:39.679 invalidate=1 00:10:39.679 rw=randwrite 00:10:39.679 time_based=1 00:10:39.679 runtime=1 00:10:39.679 ioengine=libaio 00:10:39.679 direct=1 00:10:39.679 bs=4096 00:10:39.679 iodepth=1 00:10:39.679 norandommap=0 00:10:39.679 numjobs=1 00:10:39.679 00:10:39.679 verify_dump=1 00:10:39.679 verify_backlog=512 00:10:39.679 verify_state_save=0 00:10:39.679 do_verify=1 00:10:39.679 verify=crc32c-intel 00:10:39.679 [job0] 00:10:39.679 filename=/dev/nvme0n1 00:10:39.679 [job1] 00:10:39.679 filename=/dev/nvme0n2 00:10:39.679 [job2] 00:10:39.679 filename=/dev/nvme0n3 00:10:39.679 [job3] 00:10:39.679 filename=/dev/nvme0n4 00:10:39.679 Could not set queue depth (nvme0n1) 00:10:39.679 Could not set queue depth (nvme0n2) 00:10:39.679 Could not set queue depth (nvme0n3) 00:10:39.679 Could not set queue depth (nvme0n4) 00:10:39.679 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.679 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.679 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.679 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.679 fio-3.35 00:10:39.679 Starting 4 threads 00:10:41.052 00:10:41.052 job0: (groupid=0, jobs=1): err= 0: pid=647026: Sun Nov 17 18:31:27 2024 00:10:41.052 read: IOPS=1869, BW=7478KiB/s (7657kB/s)(7560KiB/1011msec) 00:10:41.052 slat (nsec): min=5031, max=67976, avg=11780.31, stdev=8219.67 00:10:41.052 clat (usec): min=193, max=42105, avg=319.29, stdev=1642.58 00:10:41.052 lat (usec): min=199, max=42154, avg=331.07, stdev=1643.86 00:10:41.052 clat percentiles (usec): 00:10:41.052 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 215], 00:10:41.052 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:10:41.052 | 70.00th=[ 249], 80.00th=[ 277], 90.00th=[ 330], 95.00th=[ 429], 00:10:41.052 | 99.00th=[ 494], 99.50th=[ 519], 99.90th=[41681], 99.95th=[42206], 00:10:41.052 | 99.99th=[42206] 00:10:41.052 write: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec); 0 zone resets 00:10:41.053 slat (nsec): min=6426, max=34014, avg=10685.70, stdev=4422.60 00:10:41.053 clat (usec): min=138, max=441, avg=171.17, stdev=20.14 00:10:41.053 lat (usec): min=146, max=449, avg=181.85, stdev=22.02 00:10:41.053 clat percentiles (usec): 00:10:41.053 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:10:41.053 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174], 00:10:41.053 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 204], 00:10:41.053 | 99.00th=[ 223], 99.50th=[ 233], 99.90th=[ 359], 99.95th=[ 363], 00:10:41.053 | 99.99th=[ 441] 00:10:41.053 bw ( KiB/s): min= 5288, max=11096, per=32.00%, avg=8192.00, stdev=4106.88, samples=2 00:10:41.053 iops : min= 1322, max= 2774, avg=2048.00, stdev=1026.72, samples=2 00:10:41.053 lat (usec) : 250=85.55%, 500=14.07%, 750=0.30% 00:10:41.053 lat (msec) : 50=0.08% 00:10:41.053 cpu : usr=1.78%, sys=4.95%, 
ctx=3939, majf=0, minf=1 00:10:41.053 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.053 issued rwts: total=1890,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.053 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.053 job1: (groupid=0, jobs=1): err= 0: pid=647027: Sun Nov 17 18:31:27 2024 00:10:41.053 read: IOPS=1000, BW=4000KiB/s (4096kB/s)(4044KiB/1011msec) 00:10:41.053 slat (nsec): min=7568, max=61677, avg=15097.33, stdev=6623.37 00:10:41.053 clat (usec): min=206, max=41985, avg=733.51, stdev=3836.54 00:10:41.053 lat (usec): min=215, max=42001, avg=748.60, stdev=3837.00 00:10:41.053 clat percentiles (usec): 00:10:41.053 | 1.00th=[ 217], 5.00th=[ 269], 10.00th=[ 293], 20.00th=[ 314], 00:10:41.053 | 30.00th=[ 326], 40.00th=[ 347], 50.00th=[ 363], 60.00th=[ 383], 00:10:41.053 | 70.00th=[ 404], 80.00th=[ 429], 90.00th=[ 461], 95.00th=[ 519], 00:10:41.053 | 99.00th=[ 635], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:41.053 | 99.99th=[42206] 00:10:41.053 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets 00:10:41.053 slat (nsec): min=8481, max=71995, avg=14934.83, stdev=8253.95 00:10:41.053 clat (usec): min=141, max=2880, avg=223.95, stdev=92.59 00:10:41.053 lat (usec): min=152, max=2892, avg=238.88, stdev=92.95 00:10:41.053 clat percentiles (usec): 00:10:41.053 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 176], 20.00th=[ 202], 00:10:41.053 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:10:41.053 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 262], 95.00th=[ 289], 00:10:41.053 | 99.00th=[ 371], 99.50th=[ 396], 99.90th=[ 685], 99.95th=[ 2868], 00:10:41.053 | 99.99th=[ 2868] 00:10:41.053 bw ( KiB/s): min= 4096, max= 4096, per=16.00%, avg=4096.00, stdev= 0.00, samples=2 00:10:41.053 iops : min= 
1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:41.053 lat (usec) : 250=44.82%, 500=51.94%, 750=2.75% 00:10:41.053 lat (msec) : 4=0.05%, 50=0.44% 00:10:41.053 cpu : usr=2.18%, sys=4.16%, ctx=2036, majf=0, minf=1 00:10:41.053 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.053 issued rwts: total=1011,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.053 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.053 job2: (groupid=0, jobs=1): err= 0: pid=647028: Sun Nov 17 18:31:27 2024 00:10:41.053 read: IOPS=1951, BW=7808KiB/s (7995kB/s)(8120KiB/1040msec) 00:10:41.053 slat (nsec): min=5219, max=66746, avg=12493.70, stdev=6372.39 00:10:41.053 clat (usec): min=181, max=40679, avg=291.68, stdev=1268.85 00:10:41.053 lat (usec): min=188, max=40690, avg=304.17, stdev=1268.89 00:10:41.053 clat percentiles (usec): 00:10:41.053 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 208], 00:10:41.053 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 241], 00:10:41.053 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 318], 95.00th=[ 416], 00:10:41.053 | 99.00th=[ 519], 99.50th=[ 529], 99.90th=[ 594], 99.95th=[40633], 00:10:41.053 | 99.99th=[40633] 00:10:41.053 write: IOPS=1969, BW=7877KiB/s (8066kB/s)(8192KiB/1040msec); 0 zone resets 00:10:41.053 slat (nsec): min=7258, max=68552, avg=13369.59, stdev=6527.31 00:10:41.053 clat (usec): min=133, max=546, avg=185.64, stdev=49.61 00:10:41.053 lat (usec): min=141, max=567, avg=199.01, stdev=52.69 00:10:41.053 clat percentiles (usec): 00:10:41.053 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:10:41.053 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 176], 00:10:41.053 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 277], 95.00th=[ 306], 00:10:41.053 | 99.00th=[ 355], 99.50th=[ 
367], 99.90th=[ 420], 99.95th=[ 424], 00:10:41.053 | 99.99th=[ 545] 00:10:41.053 bw ( KiB/s): min= 7400, max= 8984, per=32.00%, avg=8192.00, stdev=1120.06, samples=2 00:10:41.053 iops : min= 1850, max= 2246, avg=2048.00, stdev=280.01, samples=2 00:10:41.053 lat (usec) : 250=76.48%, 500=22.78%, 750=0.69% 00:10:41.053 lat (msec) : 50=0.05% 00:10:41.053 cpu : usr=2.60%, sys=5.20%, ctx=4079, majf=0, minf=1 00:10:41.053 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.053 issued rwts: total=2030,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.053 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.053 job3: (groupid=0, jobs=1): err= 0: pid=647029: Sun Nov 17 18:31:27 2024 00:10:41.053 read: IOPS=996, BW=3984KiB/s (4080kB/s)(4104KiB/1030msec) 00:10:41.053 slat (nsec): min=5553, max=68861, avg=16821.63, stdev=10708.33 00:10:41.053 clat (usec): min=205, max=40988, avg=623.17, stdev=3575.09 00:10:41.053 lat (usec): min=215, max=41004, avg=639.99, stdev=3575.35 00:10:41.053 clat percentiles (usec): 00:10:41.053 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 233], 00:10:41.053 | 30.00th=[ 241], 40.00th=[ 253], 50.00th=[ 302], 60.00th=[ 338], 00:10:41.053 | 70.00th=[ 359], 80.00th=[ 379], 90.00th=[ 408], 95.00th=[ 437], 00:10:41.053 | 99.00th=[ 537], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:41.053 | 99.99th=[41157] 00:10:41.053 write: IOPS=1491, BW=5965KiB/s (6108kB/s)(6144KiB/1030msec); 0 zone resets 00:10:41.053 slat (nsec): min=6546, max=80666, avg=16338.12, stdev=8996.93 00:10:41.053 clat (usec): min=145, max=553, avg=219.24, stdev=48.63 00:10:41.053 lat (usec): min=154, max=579, avg=235.57, stdev=50.82 00:10:41.053 clat percentiles (usec): 00:10:41.053 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 180], 00:10:41.053 
| 30.00th=[ 188], 40.00th=[ 200], 50.00th=[ 212], 60.00th=[ 221], 00:10:41.053 | 70.00th=[ 231], 80.00th=[ 247], 90.00th=[ 285], 95.00th=[ 322], 00:10:41.053 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 457], 99.95th=[ 553], 00:10:41.053 | 99.99th=[ 553] 00:10:41.053 bw ( KiB/s): min= 4096, max= 8192, per=24.00%, avg=6144.00, stdev=2896.31, samples=2 00:10:41.053 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:41.053 lat (usec) : 250=63.70%, 500=35.28%, 750=0.70% 00:10:41.053 lat (msec) : 50=0.31% 00:10:41.053 cpu : usr=2.14%, sys=4.47%, ctx=2563, majf=0, minf=1 00:10:41.053 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.053 issued rwts: total=1026,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.053 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.053 00:10:41.053 Run status group 0 (all jobs): 00:10:41.053 READ: bw=22.4MiB/s (23.5MB/s), 3984KiB/s-7808KiB/s (4080kB/s-7995kB/s), io=23.3MiB (24.4MB), run=1011-1040msec 00:10:41.053 WRITE: bw=25.0MiB/s (26.2MB/s), 4051KiB/s-8103KiB/s (4149kB/s-8297kB/s), io=26.0MiB (27.3MB), run=1011-1040msec 00:10:41.053 00:10:41.053 Disk stats (read/write): 00:10:41.053 nvme0n1: ios=1843/2048, merge=0/0, ticks=1384/345, in_queue=1729, util=94.29% 00:10:41.053 nvme0n2: ios=980/1024, merge=0/0, ticks=913/225, in_queue=1138, util=98.48% 00:10:41.053 nvme0n3: ios=1627/2048, merge=0/0, ticks=1390/367, in_queue=1757, util=98.34% 00:10:41.053 nvme0n4: ios=1048/1160, merge=0/0, ticks=1492/254, in_queue=1746, util=97.80% 00:10:41.053 18:31:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:41.053 [global] 00:10:41.053 thread=1 00:10:41.053 invalidate=1 00:10:41.053 
rw=write 00:10:41.053 time_based=1 00:10:41.053 runtime=1 00:10:41.053 ioengine=libaio 00:10:41.053 direct=1 00:10:41.053 bs=4096 00:10:41.053 iodepth=128 00:10:41.053 norandommap=0 00:10:41.053 numjobs=1 00:10:41.053 00:10:41.053 verify_dump=1 00:10:41.053 verify_backlog=512 00:10:41.053 verify_state_save=0 00:10:41.053 do_verify=1 00:10:41.053 verify=crc32c-intel 00:10:41.053 [job0] 00:10:41.053 filename=/dev/nvme0n1 00:10:41.053 [job1] 00:10:41.053 filename=/dev/nvme0n2 00:10:41.053 [job2] 00:10:41.053 filename=/dev/nvme0n3 00:10:41.053 [job3] 00:10:41.053 filename=/dev/nvme0n4 00:10:41.053 Could not set queue depth (nvme0n1) 00:10:41.053 Could not set queue depth (nvme0n2) 00:10:41.053 Could not set queue depth (nvme0n3) 00:10:41.053 Could not set queue depth (nvme0n4) 00:10:41.053 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.053 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.053 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.053 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.053 fio-3.35 00:10:41.053 Starting 4 threads 00:10:42.429 00:10:42.429 job0: (groupid=0, jobs=1): err= 0: pid=647253: Sun Nov 17 18:31:28 2024 00:10:42.429 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:10:42.429 slat (usec): min=3, max=15034, avg=123.44, stdev=849.66 00:10:42.429 clat (usec): min=5049, max=31592, avg=15287.33, stdev=5087.04 00:10:42.429 lat (usec): min=5069, max=31606, avg=15410.77, stdev=5141.77 00:10:42.429 clat percentiles (usec): 00:10:42.429 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[10683], 20.00th=[11338], 00:10:42.429 | 30.00th=[11731], 40.00th=[11994], 50.00th=[13173], 60.00th=[15664], 00:10:42.429 | 70.00th=[17171], 80.00th=[18744], 90.00th=[22938], 95.00th=[25297], 00:10:42.429 
| 99.00th=[29492], 99.50th=[30278], 99.90th=[31589], 99.95th=[31589], 00:10:42.429 | 99.99th=[31589] 00:10:42.429 write: IOPS=3357, BW=13.1MiB/s (13.8MB/s)(13.3MiB/1012msec); 0 zone resets 00:10:42.429 slat (usec): min=4, max=40950, avg=171.23, stdev=1162.26 00:10:42.429 clat (usec): min=1210, max=54765, avg=21622.00, stdev=10463.09 00:10:42.429 lat (usec): min=1232, max=64147, avg=21793.23, stdev=10569.10 00:10:42.429 clat percentiles (usec): 00:10:42.429 | 1.00th=[ 4555], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[11469], 00:10:42.429 | 30.00th=[14091], 40.00th=[18482], 50.00th=[21103], 60.00th=[22152], 00:10:42.429 | 70.00th=[23987], 80.00th=[30278], 90.00th=[36439], 95.00th=[41157], 00:10:42.429 | 99.00th=[53216], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:10:42.429 | 99.99th=[54789] 00:10:42.429 bw ( KiB/s): min=11632, max=14536, per=21.86%, avg=13084.00, stdev=2053.44, samples=2 00:10:42.429 iops : min= 2908, max= 3634, avg=3271.00, stdev=513.36, samples=2 00:10:42.429 lat (msec) : 2=0.03%, 4=0.09%, 10=6.01%, 20=54.70%, 50=37.82% 00:10:42.429 lat (msec) : 100=1.34% 00:10:42.429 cpu : usr=4.65%, sys=8.80%, ctx=345, majf=0, minf=1 00:10:42.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:42.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.429 issued rwts: total=3072,3398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.429 job1: (groupid=0, jobs=1): err= 0: pid=647254: Sun Nov 17 18:31:28 2024 00:10:42.429 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:10:42.429 slat (usec): min=2, max=18530, avg=109.66, stdev=723.65 00:10:42.429 clat (usec): min=8595, max=37588, avg=14343.99, stdev=4767.74 00:10:42.429 lat (usec): min=8627, max=42022, avg=14453.65, stdev=4809.87 00:10:42.429 clat percentiles (usec): 00:10:42.429 | 
1.00th=[ 9503], 5.00th=[10814], 10.00th=[11076], 20.00th=[11600], 00:10:42.429 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13304], 60.00th=[14091], 00:10:42.429 | 70.00th=[14222], 80.00th=[14877], 90.00th=[17695], 95.00th=[23462], 00:10:42.429 | 99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:10:42.429 | 99.99th=[37487] 00:10:42.429 write: IOPS=3022, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1008msec); 0 zone resets 00:10:42.429 slat (usec): min=3, max=40916, avg=226.16, stdev=1577.15 00:10:42.429 clat (msec): min=7, max=127, avg=25.51, stdev=21.41 00:10:42.429 lat (msec): min=7, max=127, avg=25.74, stdev=21.60 00:10:42.429 clat percentiles (msec): 00:10:42.429 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:10:42.429 | 30.00th=[ 13], 40.00th=[ 16], 50.00th=[ 20], 60.00th=[ 22], 00:10:42.429 | 70.00th=[ 24], 80.00th=[ 32], 90.00th=[ 57], 95.00th=[ 68], 00:10:42.429 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 128], 99.95th=[ 128], 00:10:42.429 | 99.99th=[ 128] 00:10:42.429 bw ( KiB/s): min= 9800, max=13560, per=19.52%, avg=11680.00, stdev=2658.72, samples=2 00:10:42.429 iops : min= 2450, max= 3390, avg=2920.00, stdev=664.68, samples=2 00:10:42.429 lat (msec) : 10=4.01%, 20=65.56%, 50=23.95%, 100=4.92%, 250=1.55% 00:10:42.429 cpu : usr=3.48%, sys=4.67%, ctx=326, majf=0, minf=1 00:10:42.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:42.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.429 issued rwts: total=2560,3047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.429 job2: (groupid=0, jobs=1): err= 0: pid=647259: Sun Nov 17 18:31:28 2024 00:10:42.429 read: IOPS=3816, BW=14.9MiB/s (15.6MB/s)(15.6MiB/1043msec) 00:10:42.429 slat (usec): min=2, max=29542, avg=113.33, stdev=808.07 00:10:42.429 clat (usec): min=9147, 
max=69678, avg=15927.18, stdev=9070.50 00:10:42.429 lat (usec): min=9151, max=69697, avg=16040.50, stdev=9104.90 00:10:42.429 clat percentiles (usec): 00:10:42.429 | 1.00th=[ 9241], 5.00th=[10945], 10.00th=[11600], 20.00th=[12780], 00:10:42.429 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13698], 00:10:42.429 | 70.00th=[13960], 80.00th=[14353], 90.00th=[17433], 95.00th=[47973], 00:10:42.429 | 99.00th=[51643], 99.50th=[54264], 99.90th=[54789], 99.95th=[57934], 00:10:42.429 | 99.99th=[69731] 00:10:42.429 write: IOPS=3927, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1043msec); 0 zone resets 00:10:42.429 slat (usec): min=4, max=40852, avg=116.93, stdev=986.62 00:10:42.429 clat (usec): min=9018, max=73116, avg=14878.00, stdev=5310.44 00:10:42.429 lat (usec): min=9023, max=84024, avg=14994.93, stdev=5477.92 00:10:42.429 clat percentiles (usec): 00:10:42.429 | 1.00th=[10552], 5.00th=[10945], 10.00th=[11207], 20.00th=[11731], 00:10:42.429 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:10:42.429 | 70.00th=[14222], 80.00th=[15533], 90.00th=[21627], 95.00th=[22938], 00:10:42.429 | 99.00th=[43779], 99.50th=[43779], 99.90th=[51643], 99.95th=[52691], 00:10:42.429 | 99.99th=[72877] 00:10:42.429 bw ( KiB/s): min=12568, max=20200, per=27.38%, avg=16384.00, stdev=5396.64, samples=2 00:10:42.429 iops : min= 3142, max= 5050, avg=4096.00, stdev=1349.16, samples=2 00:10:42.429 lat (msec) : 10=1.45%, 20=86.93%, 50=10.09%, 100=1.54% 00:10:42.429 cpu : usr=6.81%, sys=9.79%, ctx=438, majf=0, minf=1 00:10:42.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:42.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.429 issued rwts: total=3981,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.429 job3: (groupid=0, jobs=1): err= 0: pid=647260: Sun 
Nov 17 18:31:28 2024 00:10:42.429 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:10:42.429 slat (usec): min=2, max=10007, avg=102.27, stdev=585.59 00:10:42.429 clat (usec): min=7534, max=23985, avg=13606.32, stdev=2418.52 00:10:42.429 lat (usec): min=7541, max=24003, avg=13708.59, stdev=2442.64 00:10:42.429 clat percentiles (usec): 00:10:42.429 | 1.00th=[ 7701], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[11863], 00:10:42.429 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13829], 60.00th=[14091], 00:10:42.429 | 70.00th=[14484], 80.00th=[15270], 90.00th=[15926], 95.00th=[17433], 00:10:42.429 | 99.00th=[20841], 99.50th=[22938], 99.90th=[23462], 99.95th=[23462], 00:10:42.429 | 99.99th=[23987] 00:10:42.429 write: IOPS=5043, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1004msec); 0 zone resets 00:10:42.429 slat (usec): min=3, max=4882, avg=97.21, stdev=535.69 00:10:42.429 clat (usec): min=257, max=19098, avg=12701.75, stdev=2411.40 00:10:42.429 lat (usec): min=3994, max=19136, avg=12798.96, stdev=2418.07 00:10:42.429 clat percentiles (usec): 00:10:42.429 | 1.00th=[ 6259], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[10552], 00:10:42.429 | 30.00th=[11338], 40.00th=[13042], 50.00th=[13435], 60.00th=[13960], 00:10:42.429 | 70.00th=[14222], 80.00th=[14615], 90.00th=[14877], 95.00th=[15270], 00:10:42.429 | 99.00th=[16909], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:10:42.429 | 99.99th=[19006] 00:10:42.429 bw ( KiB/s): min=19008, max=20480, per=32.99%, avg=19744.00, stdev=1040.86, samples=2 00:10:42.429 iops : min= 4752, max= 5120, avg=4936.00, stdev=260.22, samples=2 00:10:42.429 lat (usec) : 500=0.01% 00:10:42.429 lat (msec) : 4=0.01%, 10=10.45%, 20=88.23%, 50=1.29% 00:10:42.429 cpu : usr=4.49%, sys=7.98%, ctx=372, majf=0, minf=1 00:10:42.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:42.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:10:42.429 issued rwts: total=4608,5064,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.429 00:10:42.429 Run status group 0 (all jobs): 00:10:42.429 READ: bw=53.3MiB/s (55.8MB/s), 9.92MiB/s-17.9MiB/s (10.4MB/s-18.8MB/s), io=55.6MiB (58.2MB), run=1004-1043msec 00:10:42.430 WRITE: bw=58.4MiB/s (61.3MB/s), 11.8MiB/s-19.7MiB/s (12.4MB/s-20.7MB/s), io=61.0MiB (63.9MB), run=1004-1043msec 00:10:42.430 00:10:42.430 Disk stats (read/write): 00:10:42.430 nvme0n1: ios=2582/2767, merge=0/0, ticks=38625/57004, in_queue=95629, util=91.78% 00:10:42.430 nvme0n2: ios=2206/2560, merge=0/0, ticks=15822/26436, in_queue=42258, util=98.98% 00:10:42.430 nvme0n3: ios=3132/3552, merge=0/0, ticks=15740/20795, in_queue=36535, util=95.73% 00:10:42.430 nvme0n4: ios=4153/4202, merge=0/0, ticks=17667/15817, in_queue=33484, util=94.65% 00:10:42.430 18:31:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:42.430 [global] 00:10:42.430 thread=1 00:10:42.430 invalidate=1 00:10:42.430 rw=randwrite 00:10:42.430 time_based=1 00:10:42.430 runtime=1 00:10:42.430 ioengine=libaio 00:10:42.430 direct=1 00:10:42.430 bs=4096 00:10:42.430 iodepth=128 00:10:42.430 norandommap=0 00:10:42.430 numjobs=1 00:10:42.430 00:10:42.430 verify_dump=1 00:10:42.430 verify_backlog=512 00:10:42.430 verify_state_save=0 00:10:42.430 do_verify=1 00:10:42.430 verify=crc32c-intel 00:10:42.430 [job0] 00:10:42.430 filename=/dev/nvme0n1 00:10:42.430 [job1] 00:10:42.430 filename=/dev/nvme0n2 00:10:42.430 [job2] 00:10:42.430 filename=/dev/nvme0n3 00:10:42.430 [job3] 00:10:42.430 filename=/dev/nvme0n4 00:10:42.430 Could not set queue depth (nvme0n1) 00:10:42.430 Could not set queue depth (nvme0n2) 00:10:42.430 Could not set queue depth (nvme0n3) 00:10:42.430 Could not set queue depth (nvme0n4) 00:10:42.689 job0: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.689 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.689 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.689 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.689 fio-3.35 00:10:42.689 Starting 4 threads 00:10:44.065 00:10:44.065 job0: (groupid=0, jobs=1): err= 0: pid=647610: Sun Nov 17 18:31:30 2024 00:10:44.065 read: IOPS=4268, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1004msec) 00:10:44.065 slat (usec): min=2, max=15542, avg=116.37, stdev=793.04 00:10:44.065 clat (usec): min=1041, max=60583, avg=14431.26, stdev=7154.88 00:10:44.065 lat (usec): min=1312, max=60595, avg=14547.63, stdev=7224.53 00:10:44.065 clat percentiles (usec): 00:10:44.065 | 1.00th=[ 4015], 5.00th=[ 6915], 10.00th=[ 9241], 20.00th=[10945], 00:10:44.065 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12387], 60.00th=[13829], 00:10:44.065 | 70.00th=[15664], 80.00th=[16909], 90.00th=[20055], 95.00th=[25560], 00:10:44.065 | 99.00th=[50070], 99.50th=[53740], 99.90th=[60556], 99.95th=[60556], 00:10:44.065 | 99.99th=[60556] 00:10:44.065 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:10:44.065 slat (usec): min=3, max=13913, avg=97.45, stdev=680.51 00:10:44.065 clat (usec): min=571, max=60593, avg=14143.67, stdev=6054.14 00:10:44.065 lat (usec): min=586, max=60626, avg=14241.13, stdev=6096.26 00:10:44.065 clat percentiles (usec): 00:10:44.065 | 1.00th=[ 4686], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10683], 00:10:44.065 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11994], 60.00th=[13173], 00:10:44.065 | 70.00th=[14615], 80.00th=[17957], 90.00th=[21365], 95.00th=[24511], 00:10:44.065 | 99.00th=[38011], 99.50th=[42206], 99.90th=[46400], 99.95th=[48497], 00:10:44.065 | 
99.99th=[60556] 00:10:44.065 bw ( KiB/s): min=17232, max=19632, per=33.48%, avg=18432.00, stdev=1697.06, samples=2 00:10:44.065 iops : min= 4308, max= 4908, avg=4608.00, stdev=424.26, samples=2 00:10:44.065 lat (usec) : 750=0.02%, 1000=0.08% 00:10:44.065 lat (msec) : 2=0.42%, 4=0.37%, 10=11.09%, 20=73.49%, 50=14.02% 00:10:44.065 lat (msec) : 100=0.52% 00:10:44.065 cpu : usr=4.99%, sys=5.98%, ctx=310, majf=0, minf=2 00:10:44.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:44.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.065 issued rwts: total=4286,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.065 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.065 job1: (groupid=0, jobs=1): err= 0: pid=647611: Sun Nov 17 18:31:30 2024 00:10:44.065 read: IOPS=2803, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1007msec) 00:10:44.065 slat (usec): min=2, max=33316, avg=197.28, stdev=1372.81 00:10:44.065 clat (usec): min=727, max=90646, avg=23270.97, stdev=15809.92 00:10:44.065 lat (usec): min=5540, max=90662, avg=23468.25, stdev=15949.13 00:10:44.065 clat percentiles (usec): 00:10:44.065 | 1.00th=[ 6325], 5.00th=[ 8586], 10.00th=[10159], 20.00th=[10814], 00:10:44.065 | 30.00th=[11076], 40.00th=[11731], 50.00th=[15926], 60.00th=[24249], 00:10:44.065 | 70.00th=[30278], 80.00th=[37487], 90.00th=[44827], 95.00th=[51119], 00:10:44.065 | 99.00th=[82314], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:10:44.065 | 99.99th=[90702] 00:10:44.065 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:10:44.065 slat (usec): min=3, max=26064, avg=138.50, stdev=981.97 00:10:44.065 clat (usec): min=6014, max=67090, avg=19819.66, stdev=11453.75 00:10:44.065 lat (usec): min=6018, max=67118, avg=19958.15, stdev=11540.41 00:10:44.065 clat percentiles (usec): 00:10:44.065 | 1.00th=[ 7373], 5.00th=[10159], 
10.00th=[10814], 20.00th=[11469], 00:10:44.065 | 30.00th=[11863], 40.00th=[12387], 50.00th=[15139], 60.00th=[19792], 00:10:44.065 | 70.00th=[22938], 80.00th=[26346], 90.00th=[34341], 95.00th=[41681], 00:10:44.065 | 99.00th=[63177], 99.50th=[63177], 99.90th=[63177], 99.95th=[66323], 00:10:44.065 | 99.99th=[66847] 00:10:44.065 bw ( KiB/s): min= 8192, max=16384, per=22.32%, avg=12288.00, stdev=5792.62, samples=2 00:10:44.065 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:10:44.065 lat (usec) : 750=0.02% 00:10:44.065 lat (msec) : 10=6.96%, 20=50.70%, 50=37.57%, 100=4.75% 00:10:44.065 cpu : usr=2.58%, sys=4.47%, ctx=340, majf=0, minf=1 00:10:44.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:44.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.065 issued rwts: total=2823,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.065 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.065 job2: (groupid=0, jobs=1): err= 0: pid=647612: Sun Nov 17 18:31:30 2024 00:10:44.065 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:44.065 slat (usec): min=3, max=19402, avg=149.74, stdev=949.58 00:10:44.065 clat (usec): min=7144, max=69982, avg=18955.48, stdev=10869.86 00:10:44.065 lat (usec): min=7189, max=70005, avg=19105.22, stdev=10962.07 00:10:44.065 clat percentiles (usec): 00:10:44.065 | 1.00th=[ 9765], 5.00th=[11600], 10.00th=[12125], 20.00th=[12649], 00:10:44.065 | 30.00th=[13042], 40.00th=[13698], 50.00th=[14746], 60.00th=[17171], 00:10:44.065 | 70.00th=[18482], 80.00th=[20841], 90.00th=[34341], 95.00th=[46924], 00:10:44.065 | 99.00th=[65274], 99.50th=[65274], 99.90th=[65274], 99.95th=[65799], 00:10:44.065 | 99.99th=[69731] 00:10:44.065 write: IOPS=3667, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1001msec); 0 zone resets 00:10:44.065 slat (usec): min=4, max=9470, avg=109.79, 
stdev=586.78 00:10:44.065 clat (usec): min=419, max=50529, avg=16031.60, stdev=6990.04 00:10:44.065 lat (usec): min=758, max=50538, avg=16141.40, stdev=7030.12 00:10:44.065 clat percentiles (usec): 00:10:44.065 | 1.00th=[ 6456], 5.00th=[ 8029], 10.00th=[10814], 20.00th=[11731], 00:10:44.065 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13566], 60.00th=[14615], 00:10:44.065 | 70.00th=[16909], 80.00th=[19792], 90.00th=[25297], 95.00th=[30802], 00:10:44.065 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:10:44.065 | 99.99th=[50594] 00:10:44.065 bw ( KiB/s): min=16384, max=16384, per=29.76%, avg=16384.00, stdev= 0.00, samples=1 00:10:44.065 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:44.065 lat (usec) : 500=0.01%, 1000=0.12% 00:10:44.065 lat (msec) : 4=0.30%, 10=5.16%, 20=74.06%, 50=18.39%, 100=1.96% 00:10:44.065 cpu : usr=5.10%, sys=10.40%, ctx=342, majf=0, minf=1 00:10:44.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:44.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.065 issued rwts: total=3584,3671,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.065 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.065 job3: (groupid=0, jobs=1): err= 0: pid=647613: Sun Nov 17 18:31:30 2024 00:10:44.065 read: IOPS=2578, BW=10.1MiB/s (10.6MB/s)(10.6MiB/1048msec) 00:10:44.066 slat (usec): min=2, max=21231, avg=154.98, stdev=1135.37 00:10:44.066 clat (usec): min=1465, max=87629, avg=23912.70, stdev=14961.97 00:10:44.066 lat (usec): min=1482, max=95298, avg=24067.69, stdev=15050.25 00:10:44.066 clat percentiles (usec): 00:10:44.066 | 1.00th=[ 5342], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[13042], 00:10:44.066 | 30.00th=[14615], 40.00th=[16909], 50.00th=[19268], 60.00th=[21365], 00:10:44.066 | 70.00th=[25560], 80.00th=[34341], 90.00th=[49546], 95.00th=[56361], 
00:10:44.066 | 99.00th=[68682], 99.50th=[82314], 99.90th=[86508], 99.95th=[86508], 00:10:44.066 | 99.99th=[87557] 00:10:44.066 write: IOPS=2931, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1048msec); 0 zone resets 00:10:44.066 slat (usec): min=3, max=14223, avg=170.57, stdev=1042.15 00:10:44.066 clat (msec): min=3, max=107, avg=22.23, stdev=20.31 00:10:44.066 lat (msec): min=3, max=107, avg=22.40, stdev=20.44 00:10:44.066 clat percentiles (msec): 00:10:44.066 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 13], 00:10:44.066 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 20], 00:10:44.066 | 70.00th=[ 22], 80.00th=[ 22], 90.00th=[ 41], 95.00th=[ 85], 00:10:44.066 | 99.00th=[ 104], 99.50th=[ 106], 99.90th=[ 108], 99.95th=[ 108], 00:10:44.066 | 99.99th=[ 108] 00:10:44.066 bw ( KiB/s): min=11768, max=12808, per=22.32%, avg=12288.00, stdev=735.39, samples=2 00:10:44.066 iops : min= 2942, max= 3202, avg=3072.00, stdev=183.85, samples=2 00:10:44.066 lat (msec) : 2=0.02%, 4=0.31%, 10=7.27%, 20=50.88%, 50=32.72% 00:10:44.066 lat (msec) : 100=8.12%, 250=0.68% 00:10:44.066 cpu : usr=2.48%, sys=4.20%, ctx=239, majf=0, minf=1 00:10:44.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:44.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.066 issued rwts: total=2702,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.066 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.066 00:10:44.066 Run status group 0 (all jobs): 00:10:44.066 READ: bw=49.9MiB/s (52.4MB/s), 10.1MiB/s-16.7MiB/s (10.6MB/s-17.5MB/s), io=52.3MiB (54.9MB), run=1001-1048msec 00:10:44.066 WRITE: bw=53.8MiB/s (56.4MB/s), 11.5MiB/s-17.9MiB/s (12.0MB/s-18.8MB/s), io=56.3MiB (59.1MB), run=1001-1048msec 00:10:44.066 00:10:44.066 Disk stats (read/write): 00:10:44.066 nvme0n1: ios=3609/3631, merge=0/0, ticks=49103/45561, in_queue=94664, util=98.40% 
00:10:44.066 nvme0n2: ios=2589/2914, merge=0/0, ticks=22518/20803, in_queue=43321, util=98.17% 00:10:44.066 nvme0n3: ios=3162/3584, merge=0/0, ticks=24118/27772, in_queue=51890, util=97.29% 00:10:44.066 nvme0n4: ios=2088/2297, merge=0/0, ticks=25597/35009, in_queue=60606, util=98.01% 00:10:44.066 18:31:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:44.066 18:31:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=647752 00:10:44.066 18:31:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:44.066 18:31:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:44.066 [global] 00:10:44.066 thread=1 00:10:44.066 invalidate=1 00:10:44.066 rw=read 00:10:44.066 time_based=1 00:10:44.066 runtime=10 00:10:44.066 ioengine=libaio 00:10:44.066 direct=1 00:10:44.066 bs=4096 00:10:44.066 iodepth=1 00:10:44.066 norandommap=1 00:10:44.066 numjobs=1 00:10:44.066 00:10:44.066 [job0] 00:10:44.066 filename=/dev/nvme0n1 00:10:44.066 [job1] 00:10:44.066 filename=/dev/nvme0n2 00:10:44.066 [job2] 00:10:44.066 filename=/dev/nvme0n3 00:10:44.066 [job3] 00:10:44.066 filename=/dev/nvme0n4 00:10:44.066 Could not set queue depth (nvme0n1) 00:10:44.066 Could not set queue depth (nvme0n2) 00:10:44.066 Could not set queue depth (nvme0n3) 00:10:44.066 Could not set queue depth (nvme0n4) 00:10:44.066 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.066 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.066 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.066 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.066 fio-3.35 00:10:44.066 Starting 4 
threads 00:10:47.346 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:47.346 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:47.346 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1245184, buflen=4096 00:10:47.346 fio: pid=647847, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:47.604 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:47.604 18:31:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:47.604 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=516096, buflen=4096 00:10:47.604 fio: pid=647846, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:47.862 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:47.862 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:47.862 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1712128, buflen=4096 00:10:47.862 fio: pid=647844, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:48.121 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=15179776, buflen=4096 00:10:48.121 fio: pid=647845, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:48.121 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev 
in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:48.121 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:48.121 00:10:48.121 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=647844: Sun Nov 17 18:31:34 2024 00:10:48.121 read: IOPS=116, BW=467KiB/s (478kB/s)(1672KiB/3584msec) 00:10:48.121 slat (usec): min=7, max=16925, avg=86.17, stdev=980.62 00:10:48.121 clat (usec): min=212, max=42018, avg=8426.83, stdev=16262.93 00:10:48.121 lat (usec): min=229, max=58024, avg=8513.16, stdev=16425.10 00:10:48.121 clat percentiles (usec): 00:10:48.121 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 231], 20.00th=[ 237], 00:10:48.121 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 258], 60.00th=[ 297], 00:10:48.121 | 70.00th=[ 371], 80.00th=[40633], 90.00th=[40633], 95.00th=[41157], 00:10:48.121 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:48.121 | 99.99th=[42206] 00:10:48.121 bw ( KiB/s): min= 176, max= 1008, per=11.26%, avg=534.67, stdev=312.86, samples=6 00:10:48.121 iops : min= 44, max= 252, avg=133.67, stdev=78.21, samples=6 00:10:48.121 lat (usec) : 250=44.63%, 500=32.94%, 750=1.91%, 1000=0.24% 00:10:48.121 lat (msec) : 50=20.05% 00:10:48.121 cpu : usr=0.06%, sys=0.42%, ctx=422, majf=0, minf=1 00:10:48.121 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.121 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.121 issued rwts: total=419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.121 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.121 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=647845: Sun Nov 17 18:31:34 2024 00:10:48.121 read: 
IOPS=964, BW=3858KiB/s (3951kB/s)(14.5MiB/3842msec) 00:10:48.121 slat (usec): min=4, max=8804, avg=11.18, stdev=192.54 00:10:48.121 clat (usec): min=161, max=42473, avg=1017.50, stdev=5758.85 00:10:48.121 lat (usec): min=167, max=42506, avg=1028.69, stdev=5763.85 00:10:48.121 clat percentiles (usec): 00:10:48.121 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:10:48.121 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:10:48.121 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 239], 00:10:48.121 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:48.121 | 99.99th=[42730] 00:10:48.121 bw ( KiB/s): min= 96, max=17501, per=55.07%, avg=2611.00, stdev=6565.98, samples=7 00:10:48.121 iops : min= 24, max= 4375, avg=652.71, stdev=1641.40, samples=7 00:10:48.121 lat (usec) : 250=96.44%, 500=1.56% 00:10:48.121 lat (msec) : 50=1.97% 00:10:48.121 cpu : usr=0.31%, sys=0.94%, ctx=3710, majf=0, minf=1 00:10:48.121 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.121 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.121 issued rwts: total=3707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.121 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.121 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=647846: Sun Nov 17 18:31:34 2024 00:10:48.121 read: IOPS=38, BW=154KiB/s (157kB/s)(504KiB/3277msec) 00:10:48.121 slat (nsec): min=9017, max=59158, avg=24642.48, stdev=9673.28 00:10:48.121 clat (usec): min=258, max=42220, avg=25790.33, stdev=20029.88 00:10:48.121 lat (usec): min=279, max=42232, avg=25815.02, stdev=20027.84 00:10:48.121 clat percentiles (usec): 00:10:48.121 | 1.00th=[ 262], 5.00th=[ 289], 10.00th=[ 310], 20.00th=[ 379], 00:10:48.121 | 30.00th=[ 416], 40.00th=[40633], 50.00th=[40633], 
60.00th=[41157], 00:10:48.121 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:48.121 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:48.121 | 99.99th=[42206] 00:10:48.121 bw ( KiB/s): min= 112, max= 224, per=3.37%, avg=160.00, stdev=43.23, samples=6 00:10:48.121 iops : min= 28, max= 56, avg=40.00, stdev=10.81, samples=6 00:10:48.121 lat (usec) : 500=37.80% 00:10:48.121 lat (msec) : 50=61.42% 00:10:48.121 cpu : usr=0.00%, sys=0.18%, ctx=129, majf=0, minf=2 00:10:48.121 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.121 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.121 issued rwts: total=127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.121 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.121 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=647847: Sun Nov 17 18:31:34 2024 00:10:48.121 read: IOPS=102, BW=410KiB/s (419kB/s)(1216KiB/2969msec) 00:10:48.121 slat (nsec): min=8632, max=49829, avg=15413.44, stdev=9010.55 00:10:48.121 clat (usec): min=205, max=41300, avg=9667.22, stdev=17153.37 00:10:48.121 lat (usec): min=214, max=41335, avg=9682.57, stdev=17159.24 00:10:48.121 clat percentiles (usec): 00:10:48.121 | 1.00th=[ 208], 5.00th=[ 221], 10.00th=[ 233], 20.00th=[ 243], 00:10:48.121 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 293], 00:10:48.121 | 70.00th=[ 318], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:48.121 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:48.121 | 99.99th=[41157] 00:10:48.121 bw ( KiB/s): min= 96, max= 1944, per=9.87%, avg=468.80, stdev=824.67, samples=5 00:10:48.121 iops : min= 24, max= 486, avg=117.20, stdev=206.17, samples=5 00:10:48.121 lat (usec) : 250=31.48%, 500=44.59%, 750=0.33% 00:10:48.121 lat 
(msec) : 10=0.33%, 50=22.95% 00:10:48.121 cpu : usr=0.00%, sys=0.30%, ctx=308, majf=0, minf=1 00:10:48.121 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.121 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.121 issued rwts: total=305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.121 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.121 00:10:48.121 Run status group 0 (all jobs): 00:10:48.122 READ: bw=4741KiB/s (4855kB/s), 154KiB/s-3858KiB/s (157kB/s-3951kB/s), io=17.8MiB (18.7MB), run=2969-3842msec 00:10:48.122 00:10:48.122 Disk stats (read/write): 00:10:48.122 nvme0n1: ios=456/0, merge=0/0, ticks=4467/0, in_queue=4467, util=99.83% 00:10:48.122 nvme0n2: ios=2731/0, merge=0/0, ticks=3566/0, in_queue=3566, util=96.60% 00:10:48.122 nvme0n3: ios=164/0, merge=0/0, ticks=3802/0, in_queue=3802, util=100.00% 00:10:48.122 nvme0n4: ios=351/0, merge=0/0, ticks=3474/0, in_queue=3474, util=100.00% 00:10:48.380 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:48.380 18:31:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:48.638 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:48.638 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:48.896 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:48.896 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:49.155 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.155 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:49.721 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:49.721 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 647752 00:10:49.721 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:49.721 18:31:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:49.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.721 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:49.721 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:49.721 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:49.721 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.721 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:49.721 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.721 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:49.721 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:49.721 18:31:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:49.721 nvmf hotplug test: fio failed as expected 00:10:49.721 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:49.978 rmmod nvme_tcp 00:10:49.978 rmmod nvme_fabrics 00:10:49.978 rmmod nvme_keyring 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 
00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 645713 ']' 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 645713 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 645713 ']' 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 645713 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 645713 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 645713' 00:10:49.978 killing process with pid 645713 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 645713 00:10:49.978 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 645713 00:10:50.237 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.238 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.238 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.238 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:50.238 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:10:50.238 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.238 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:50.238 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.238 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:50.238 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.238 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.238 18:31:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:52.777 00:10:52.777 real 0m24.239s 00:10:52.777 user 1m25.676s 00:10:52.777 sys 0m6.575s 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.777 ************************************ 00:10:52.777 END TEST nvmf_fio_target 00:10:52.777 ************************************ 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:52.777 ************************************ 
00:10:52.777 START TEST nvmf_bdevio 00:10:52.777 ************************************ 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:52.777 * Looking for test storage... 00:10:52.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.777 18:31:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:52.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.777 --rc genhtml_branch_coverage=1 00:10:52.777 --rc genhtml_function_coverage=1 00:10:52.777 --rc genhtml_legend=1 00:10:52.777 --rc geninfo_all_blocks=1 00:10:52.777 --rc geninfo_unexecuted_blocks=1 00:10:52.777 00:10:52.777 ' 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:52.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.777 --rc genhtml_branch_coverage=1 00:10:52.777 --rc genhtml_function_coverage=1 00:10:52.777 --rc genhtml_legend=1 00:10:52.777 --rc geninfo_all_blocks=1 00:10:52.777 --rc geninfo_unexecuted_blocks=1 00:10:52.777 00:10:52.777 ' 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:52.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.777 --rc genhtml_branch_coverage=1 00:10:52.777 --rc genhtml_function_coverage=1 00:10:52.777 --rc genhtml_legend=1 00:10:52.777 --rc geninfo_all_blocks=1 00:10:52.777 --rc geninfo_unexecuted_blocks=1 00:10:52.777 00:10:52.777 ' 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:52.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.777 --rc genhtml_branch_coverage=1 00:10:52.777 --rc genhtml_function_coverage=1 00:10:52.777 --rc genhtml_legend=1 00:10:52.777 --rc geninfo_all_blocks=1 00:10:52.777 --rc geninfo_unexecuted_blocks=1 00:10:52.777 00:10:52.777 ' 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.777 18:31:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.777 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:52.778 18:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.683 18:31:41 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:54.683 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:54.684 18:31:41 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:54.684 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:54.684 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:54.684 
18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:54.684 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:54.684 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:54.684 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:54.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:54.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:10:54.943 00:10:54.943 --- 10.0.0.2 ping statistics --- 00:10:54.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.943 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:54.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:54.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:10:54.943 00:10:54.943 --- 10.0.0.1 ping statistics --- 00:10:54.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.943 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:54.943 18:31:41 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=650487 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 650487 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 650487 ']' 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.943 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.943 [2024-11-17 18:31:41.388307] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:10:54.943 [2024-11-17 18:31:41.388390] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.943 [2024-11-17 18:31:41.463577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.943 [2024-11-17 18:31:41.511409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.943 [2024-11-17 18:31:41.511488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.943 [2024-11-17 18:31:41.511502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.943 [2024-11-17 18:31:41.511513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.943 [2024-11-17 18:31:41.511522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
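The trace above (common.sh@271-291) moved one port of the NIC into a private network namespace, addressed both ends, and ping-tested the link before starting the target. A dry-run sketch of that plumbing follows; `run` only echoes each command, since the real ones need root, and the `cvl_0_*` names and `10.0.0.x/24` addresses are the ones from this particular run, not fixed defaults.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup performed by nvmf_tcp_init in this log.
# run() echoes instead of executing, so the sketch is safe unprivileged.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # target-side port, moved into the namespace
INI_IF=cvl_0_1          # initiator-side port, stays in the root namespace
run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

The payoff is the `NVMF_TARGET_NS_CMD` array seen in the trace: every target-side command (including `nvmf_tgt` itself) is simply prefixed with `ip netns exec cvl_0_0_ns_spdk`, so target and initiator talk over a real NIC loopback instead of the host stack.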
00:10:54.943 [2024-11-17 18:31:41.513212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:54.943 [2024-11-17 18:31:41.513275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:54.943 [2024-11-17 18:31:41.513342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:54.943 [2024-11-17 18:31:41.513346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.202 [2024-11-17 18:31:41.653431] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.202 18:31:41 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.202 Malloc0 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.202 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.203 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.203 [2024-11-17 18:31:41.713768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.203 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.203 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:55.203 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:55.203 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:55.203 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:55.203 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:55.203 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:55.203 { 00:10:55.203 "params": { 00:10:55.203 "name": "Nvme$subsystem", 00:10:55.203 "trtype": "$TEST_TRANSPORT", 00:10:55.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:55.203 "adrfam": "ipv4", 00:10:55.203 "trsvcid": "$NVMF_PORT", 00:10:55.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:55.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:55.203 "hdgst": ${hdgst:-false}, 00:10:55.203 "ddgst": ${ddgst:-false} 00:10:55.203 }, 00:10:55.203 "method": "bdev_nvme_attach_controller" 00:10:55.203 } 00:10:55.203 EOF 00:10:55.203 )") 00:10:55.203 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:55.203 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
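The per-subsystem fragment emitted above comes from a heredoc in which `${hdgst:-false}` and `${ddgst:-false}` default the digest options to `false` whenever the caller leaves them unset; `jq .` then validates and compacts the accumulated pieces. A minimal standalone sketch of that heredoc pattern (addresses and the subsystem count are hard-coded here for illustration, where the real helper derives them from the environment):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json heredoc pattern from nvmf/common.sh.
# hdgst/ddgst are deliberately left unset so the :- defaults kick in.
subsystem=1
gen_subsystem_json() {
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
config=$(gen_subsystem_json)
echo "$config"
```

Because the heredoc delimiter is unquoted, `$subsystem` and the `${var:-default}` expansions are interpolated at generation time, which is exactly what lets one template serve `Nvme1`, `Nvme2`, ... in the loop over `"${@:-1}"`.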
00:10:55.203 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:55.203 18:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:55.203 "params": { 00:10:55.203 "name": "Nvme1", 00:10:55.203 "trtype": "tcp", 00:10:55.203 "traddr": "10.0.0.2", 00:10:55.203 "adrfam": "ipv4", 00:10:55.203 "trsvcid": "4420", 00:10:55.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:55.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:55.203 "hdgst": false, 00:10:55.203 "ddgst": false 00:10:55.203 }, 00:10:55.203 "method": "bdev_nvme_attach_controller" 00:10:55.203 }' 00:10:55.203 [2024-11-17 18:31:41.762886] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:10:55.203 [2024-11-17 18:31:41.762984] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid650630 ] 00:10:55.461 [2024-11-17 18:31:41.835436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:55.461 [2024-11-17 18:31:41.887684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.461 [2024-11-17 18:31:41.887739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.461 [2024-11-17 18:31:41.887743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.719 I/O targets: 00:10:55.719 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:55.719 00:10:55.719 00:10:55.719 CUnit - A unit testing framework for C - Version 2.1-3 00:10:55.719 http://cunit.sourceforge.net/ 00:10:55.719 00:10:55.719 00:10:55.719 Suite: bdevio tests on: Nvme1n1 00:10:55.719 Test: blockdev write read block ...passed 00:10:55.719 Test: blockdev write zeroes read block ...passed 00:10:55.719 Test: blockdev write zeroes read no split ...passed 00:10:55.977 Test: blockdev write zeroes read split 
...passed 00:10:55.977 Test: blockdev write zeroes read split partial ...passed 00:10:55.977 Test: blockdev reset ...[2024-11-17 18:31:42.312972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:55.977 [2024-11-17 18:31:42.313091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2094ac0 (9): Bad file descriptor 00:10:55.977 [2024-11-17 18:31:42.456043] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:55.977 passed 00:10:55.977 Test: blockdev write read 8 blocks ...passed 00:10:55.977 Test: blockdev write read size > 128k ...passed 00:10:55.977 Test: blockdev write read invalid size ...passed 00:10:55.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:55.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:55.977 Test: blockdev write read max offset ...passed 00:10:56.235 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:56.235 Test: blockdev writev readv 8 blocks ...passed 00:10:56.235 Test: blockdev writev readv 30 x 1block ...passed 00:10:56.235 Test: blockdev writev readv block ...passed 00:10:56.235 Test: blockdev writev readv size > 128k ...passed 00:10:56.235 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:56.235 Test: blockdev comparev and writev ...[2024-11-17 18:31:42.711932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.235 [2024-11-17 18:31:42.711970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:56.235 [2024-11-17 18:31:42.711996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.235 [2024-11-17 
18:31:42.712014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:56.235 [2024-11-17 18:31:42.712402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.235 [2024-11-17 18:31:42.712427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:56.235 [2024-11-17 18:31:42.712450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.235 [2024-11-17 18:31:42.712468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:56.235 [2024-11-17 18:31:42.712841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.235 [2024-11-17 18:31:42.712866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:56.235 [2024-11-17 18:31:42.712888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.235 [2024-11-17 18:31:42.712905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:56.235 [2024-11-17 18:31:42.713226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.235 [2024-11-17 18:31:42.713250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:56.235 [2024-11-17 18:31:42.713273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.235 [2024-11-17 18:31:42.713290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:56.235 passed 00:10:56.235 Test: blockdev nvme passthru rw ...passed 00:10:56.235 Test: blockdev nvme passthru vendor specific ...[2024-11-17 18:31:42.796956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.235 [2024-11-17 18:31:42.796985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:56.235 [2024-11-17 18:31:42.797124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.235 [2024-11-17 18:31:42.797148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:56.235 [2024-11-17 18:31:42.797282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.235 [2024-11-17 18:31:42.797305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:56.235 [2024-11-17 18:31:42.797439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.235 [2024-11-17 18:31:42.797462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:56.235 passed 00:10:56.494 Test: blockdev nvme admin passthru ...passed 00:10:56.494 Test: blockdev copy ...passed 00:10:56.494 00:10:56.494 Run Summary: Type Total Ran Passed Failed Inactive 00:10:56.494 suites 1 1 n/a 0 0 00:10:56.494 tests 23 23 23 0 0 00:10:56.494 asserts 152 152 152 0 n/a 00:10:56.494 00:10:56.494 Elapsed time = 1.396 seconds 
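The bdevio binary above was launched with `--json /dev/fd/62`: the JSON generator's stdout is handed over through bash process substitution rather than a temp file. Reduced to a neutral example (plain `cat` stands in for the bdevio binary, which just opens the path it is given and reads):

```shell
#!/usr/bin/env bash
# Process substitution: <(cmd) expands to a /dev/fd/N path whose reads
# see cmd's stdout -- this is how bdevio receives --json /dev/fd/62.
gen_config() { echo '{"subsystems": []}'; }

echo "path given to the consumer: $(echo <(gen_config))"
cat <(gen_config)    # the "consumer" opens the path and reads the JSON
```

This keeps the generated config out of the filesystem entirely, at the cost of being a bash-ism: the exact `/dev/fd/N` number (62 in the log) is whatever descriptor the shell happens to allocate.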
00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:56.494 rmmod nvme_tcp 00:10:56.494 rmmod nvme_fabrics 00:10:56.494 rmmod nvme_keyring 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 650487 ']' 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 650487 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 650487 ']' 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 650487 00:10:56.494 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:56.753 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.753 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 650487 00:10:56.753 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:56.753 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:56.753 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 650487' 00:10:56.753 killing process with pid 650487 00:10:56.753 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 650487 00:10:56.753 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 650487 00:10:57.012 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.012 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.012 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.012 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:57.012 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:57.012 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.012 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.012 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:10:57.012 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:57.012 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.012 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.012 18:31:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.920 18:31:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:58.920 00:10:58.920 real 0m6.618s 00:10:58.920 user 0m10.678s 00:10:58.920 sys 0m2.258s 00:10:58.920 18:31:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.920 18:31:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:58.920 ************************************ 00:10:58.920 END TEST nvmf_bdevio 00:10:58.920 ************************************ 00:10:58.920 18:31:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:58.920 00:10:58.920 real 3m55.480s 00:10:58.920 user 10m15.024s 00:10:58.920 sys 1m6.974s 00:10:58.920 18:31:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.920 18:31:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:58.920 ************************************ 00:10:58.920 END TEST nvmf_target_core 00:10:58.920 ************************************ 00:10:58.920 18:31:45 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:58.920 18:31:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:58.920 18:31:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.920 18:31:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:10:58.920 ************************************ 00:10:58.920 START TEST nvmf_target_extra 00:10:58.920 ************************************ 00:10:58.920 18:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:59.179 * Looking for test storage... 00:10:59.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:59.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.179 --rc genhtml_branch_coverage=1 00:10:59.179 --rc genhtml_function_coverage=1 00:10:59.179 --rc genhtml_legend=1 00:10:59.179 --rc geninfo_all_blocks=1 
00:10:59.179 --rc geninfo_unexecuted_blocks=1 00:10:59.179 00:10:59.179 ' 00:10:59.179 18:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:59.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.179 --rc genhtml_branch_coverage=1 00:10:59.179 --rc genhtml_function_coverage=1 00:10:59.179 --rc genhtml_legend=1 00:10:59.179 --rc geninfo_all_blocks=1 00:10:59.179 --rc geninfo_unexecuted_blocks=1 00:10:59.179 00:10:59.179 ' 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:59.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.180 --rc genhtml_branch_coverage=1 00:10:59.180 --rc genhtml_function_coverage=1 00:10:59.180 --rc genhtml_legend=1 00:10:59.180 --rc geninfo_all_blocks=1 00:10:59.180 --rc geninfo_unexecuted_blocks=1 00:10:59.180 00:10:59.180 ' 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:59.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.180 --rc genhtml_branch_coverage=1 00:10:59.180 --rc genhtml_function_coverage=1 00:10:59.180 --rc genhtml_legend=1 00:10:59.180 --rc geninfo_all_blocks=1 00:10:59.180 --rc geninfo_unexecuted_blocks=1 00:10:59.180 00:10:59.180 ' 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:59.180 ************************************ 00:10:59.180 START TEST nvmf_example 00:10:59.180 ************************************ 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:59.180 * Looking for test storage... 00:10:59.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:59.180 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.439 
18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.439 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:59.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.439 --rc genhtml_branch_coverage=1 00:10:59.439 --rc genhtml_function_coverage=1 00:10:59.440 --rc genhtml_legend=1 00:10:59.440 --rc geninfo_all_blocks=1 00:10:59.440 --rc geninfo_unexecuted_blocks=1 00:10:59.440 00:10:59.440 ' 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:59.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.440 --rc genhtml_branch_coverage=1 00:10:59.440 --rc genhtml_function_coverage=1 00:10:59.440 --rc genhtml_legend=1 00:10:59.440 --rc geninfo_all_blocks=1 00:10:59.440 --rc geninfo_unexecuted_blocks=1 00:10:59.440 00:10:59.440 ' 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:59.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.440 --rc genhtml_branch_coverage=1 00:10:59.440 --rc genhtml_function_coverage=1 00:10:59.440 --rc genhtml_legend=1 00:10:59.440 --rc geninfo_all_blocks=1 00:10:59.440 --rc geninfo_unexecuted_blocks=1 00:10:59.440 00:10:59.440 ' 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:59.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.440 --rc 
genhtml_branch_coverage=1 00:10:59.440 --rc genhtml_function_coverage=1 00:10:59.440 --rc genhtml_legend=1 00:10:59.440 --rc geninfo_all_blocks=1 00:10:59.440 --rc geninfo_unexecuted_blocks=1 00:10:59.440 00:10:59.440 ' 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:59.440 18:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.440 
18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:59.440 18:31:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:01.978 18:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:01.978 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:01.979 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:01.979 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:01.979 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.979 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.979 18:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:01.979 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.979 
18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:01.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:11:01.979 00:11:01.979 --- 10.0.0.2 ping statistics --- 00:11:01.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.979 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:11:01.979 00:11:01.979 --- 10.0.0.1 ping statistics --- 00:11:01.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.979 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.979 18:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=652778 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 652778 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 652778 ']' 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:01.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.979 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:01.980 18:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:01.980 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:14.178 Initializing NVMe Controllers 00:11:14.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:14.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:14.178 Initialization complete. Launching workers. 00:11:14.178 ======================================================== 00:11:14.178 Latency(us) 00:11:14.178 Device Information : IOPS MiB/s Average min max 00:11:14.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15378.03 60.07 4161.43 692.12 15232.20 00:11:14.178 ======================================================== 00:11:14.178 Total : 15378.03 60.07 4161.43 692.12 15232.20 00:11:14.178 00:11:14.178 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:14.178 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:14.178 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:14.178 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:14.178 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:14.178 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:14.178 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.178 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:14.178 rmmod nvme_tcp 00:11:14.179 rmmod nvme_fabrics 00:11:14.179 rmmod nvme_keyring 00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 652778 ']' 00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 652778 00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 652778 ']' 00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 652778 00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 652778 00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 652778' 00:11:14.179 killing process with pid 652778 00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 652778 00:11:14.179 18:31:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 652778 00:11:14.179 nvmf threads initialize successfully 00:11:14.179 bdev subsystem init successfully 00:11:14.179 created a nvmf target service 00:11:14.179 create targets's poll groups done 00:11:14.179 all subsystems of target started 00:11:14.179 nvmf target is running 00:11:14.179 all subsystems of target stopped 00:11:14.179 destroy targets's poll groups done 00:11:14.179 destroyed the nvmf target service 00:11:14.179 bdev subsystem finish 
successfully 00:11:14.179 nvmf threads destroy successfully 00:11:14.179 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:14.179 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:14.179 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:14.179 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:14.179 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:14.179 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:14.179 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:14.179 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:14.179 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:14.179 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.179 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.179 18:31:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.747 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:14.747 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:14.747 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.747 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.747 00:11:14.747 real 0m15.644s 00:11:14.747 user 0m42.950s 00:11:14.747 sys 0m3.432s 00:11:14.747 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.747 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.747 ************************************ 00:11:14.747 END TEST nvmf_example 00:11:14.747 ************************************ 00:11:14.747 18:32:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:14.747 18:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.747 18:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.747 18:32:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:15.008 ************************************ 00:11:15.008 START TEST nvmf_filesystem 00:11:15.008 ************************************ 00:11:15.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:15.008 * Looking for test storage... 
00:11:15.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:15.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:15.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:15.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:15.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.008 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:15.009 
18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:15.009 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:15.009 --rc genhtml_branch_coverage=1 00:11:15.009 --rc genhtml_function_coverage=1 00:11:15.009 --rc genhtml_legend=1 00:11:15.009 --rc geninfo_all_blocks=1 00:11:15.009 --rc geninfo_unexecuted_blocks=1 00:11:15.009 00:11:15.009 ' 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:15.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.009 --rc genhtml_branch_coverage=1 00:11:15.009 --rc genhtml_function_coverage=1 00:11:15.009 --rc genhtml_legend=1 00:11:15.009 --rc geninfo_all_blocks=1 00:11:15.009 --rc geninfo_unexecuted_blocks=1 00:11:15.009 00:11:15.009 ' 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:15.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.009 --rc genhtml_branch_coverage=1 00:11:15.009 --rc genhtml_function_coverage=1 00:11:15.009 --rc genhtml_legend=1 00:11:15.009 --rc geninfo_all_blocks=1 00:11:15.009 --rc geninfo_unexecuted_blocks=1 00:11:15.009 00:11:15.009 ' 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:15.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.009 --rc genhtml_branch_coverage=1 00:11:15.009 --rc genhtml_function_coverage=1 00:11:15.009 --rc genhtml_legend=1 00:11:15.009 --rc geninfo_all_blocks=1 00:11:15.009 --rc geninfo_unexecuted_blocks=1 00:11:15.009 00:11:15.009 ' 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:15.009 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:15.009 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:15.009 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:15.009 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:15.010 
18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:15.010 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:15.010 #define SPDK_CONFIG_H 00:11:15.010 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:15.010 #define SPDK_CONFIG_APPS 1 00:11:15.010 #define SPDK_CONFIG_ARCH native 00:11:15.010 #undef SPDK_CONFIG_ASAN 00:11:15.010 #undef SPDK_CONFIG_AVAHI 00:11:15.010 #undef SPDK_CONFIG_CET 00:11:15.010 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:15.010 #define SPDK_CONFIG_COVERAGE 1 00:11:15.010 #define SPDK_CONFIG_CROSS_PREFIX 00:11:15.010 #undef SPDK_CONFIG_CRYPTO 00:11:15.010 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:15.010 #undef SPDK_CONFIG_CUSTOMOCF 00:11:15.010 #undef SPDK_CONFIG_DAOS 00:11:15.010 #define SPDK_CONFIG_DAOS_DIR 00:11:15.010 #define SPDK_CONFIG_DEBUG 1 00:11:15.010 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:15.010 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:15.010 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:15.010 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:15.010 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:15.010 #undef SPDK_CONFIG_DPDK_UADK 00:11:15.010 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:15.010 #define SPDK_CONFIG_EXAMPLES 1 00:11:15.010 #undef SPDK_CONFIG_FC 00:11:15.010 #define SPDK_CONFIG_FC_PATH 00:11:15.010 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:15.010 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:15.010 #define SPDK_CONFIG_FSDEV 1 00:11:15.010 #undef SPDK_CONFIG_FUSE 00:11:15.010 #undef SPDK_CONFIG_FUZZER 00:11:15.010 #define 
SPDK_CONFIG_FUZZER_LIB 00:11:15.010 #undef SPDK_CONFIG_GOLANG 00:11:15.010 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:15.010 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:15.010 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:15.010 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:15.010 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:15.010 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:15.010 #undef SPDK_CONFIG_HAVE_LZ4 00:11:15.010 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:15.010 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:15.010 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:15.010 #define SPDK_CONFIG_IDXD 1 00:11:15.010 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:15.010 #undef SPDK_CONFIG_IPSEC_MB 00:11:15.010 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:15.010 #define SPDK_CONFIG_ISAL 1 00:11:15.010 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:15.010 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:15.010 #define SPDK_CONFIG_LIBDIR 00:11:15.010 #undef SPDK_CONFIG_LTO 00:11:15.010 #define SPDK_CONFIG_MAX_LCORES 128 00:11:15.010 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:15.010 #define SPDK_CONFIG_NVME_CUSE 1 00:11:15.010 #undef SPDK_CONFIG_OCF 00:11:15.010 #define SPDK_CONFIG_OCF_PATH 00:11:15.010 #define SPDK_CONFIG_OPENSSL_PATH 00:11:15.010 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:15.010 #define SPDK_CONFIG_PGO_DIR 00:11:15.010 #undef SPDK_CONFIG_PGO_USE 00:11:15.010 #define SPDK_CONFIG_PREFIX /usr/local 00:11:15.010 #undef SPDK_CONFIG_RAID5F 00:11:15.010 #undef SPDK_CONFIG_RBD 00:11:15.010 #define SPDK_CONFIG_RDMA 1 00:11:15.010 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:15.010 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:15.010 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:15.010 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:15.010 #define SPDK_CONFIG_SHARED 1 00:11:15.010 #undef SPDK_CONFIG_SMA 00:11:15.010 #define SPDK_CONFIG_TESTS 1 00:11:15.010 #undef SPDK_CONFIG_TSAN 00:11:15.010 #define SPDK_CONFIG_UBLK 1 00:11:15.010 #define SPDK_CONFIG_UBSAN 1 00:11:15.010 #undef 
SPDK_CONFIG_UNIT_TESTS 00:11:15.010 #undef SPDK_CONFIG_URING 00:11:15.010 #define SPDK_CONFIG_URING_PATH 00:11:15.010 #undef SPDK_CONFIG_URING_ZNS 00:11:15.010 #undef SPDK_CONFIG_USDT 00:11:15.010 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:15.010 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:15.010 #define SPDK_CONFIG_VFIO_USER 1 00:11:15.010 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:15.010 #define SPDK_CONFIG_VHOST 1 00:11:15.010 #define SPDK_CONFIG_VIRTIO 1 00:11:15.010 #undef SPDK_CONFIG_VTUNE 00:11:15.010 #define SPDK_CONFIG_VTUNE_DIR 00:11:15.010 #define SPDK_CONFIG_WERROR 1 00:11:15.010 #define SPDK_CONFIG_WPDK_DIR 00:11:15.010 #undef SPDK_CONFIG_XNVME 00:11:15.010 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.010 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.011 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:15.011 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:15.011 
18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:15.011 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:15.011 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:15.012 
18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v23.11 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:15.012 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:15.012 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 654478 ]] 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 654478 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:15.274 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.l8rI1P 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.l8rI1P/tests/target /tmp/spdk.l8rI1P 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=53495373824 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988528128 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=8493154304 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:15.275 
18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30984232960 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375277568 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22429696 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993924096 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:11:15.275 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=339968 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:15.275 * Looking for test storage... 
00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=53495373824 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=10707746816 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.275 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:15.275 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.275 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:15.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.276 --rc genhtml_branch_coverage=1 00:11:15.276 --rc genhtml_function_coverage=1 00:11:15.276 --rc genhtml_legend=1 00:11:15.276 --rc geninfo_all_blocks=1 00:11:15.276 --rc geninfo_unexecuted_blocks=1 00:11:15.276 00:11:15.276 ' 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:15.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.276 --rc genhtml_branch_coverage=1 00:11:15.276 --rc genhtml_function_coverage=1 00:11:15.276 --rc genhtml_legend=1 00:11:15.276 --rc geninfo_all_blocks=1 00:11:15.276 --rc geninfo_unexecuted_blocks=1 00:11:15.276 00:11:15.276 ' 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:15.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.276 --rc genhtml_branch_coverage=1 00:11:15.276 --rc genhtml_function_coverage=1 00:11:15.276 --rc genhtml_legend=1 00:11:15.276 --rc geninfo_all_blocks=1 00:11:15.276 --rc geninfo_unexecuted_blocks=1 00:11:15.276 00:11:15.276 ' 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:15.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.276 --rc genhtml_branch_coverage=1 00:11:15.276 --rc genhtml_function_coverage=1 00:11:15.276 --rc genhtml_legend=1 00:11:15.276 --rc geninfo_all_blocks=1 00:11:15.276 --rc geninfo_unexecuted_blocks=1 00:11:15.276 00:11:15.276 ' 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.276 18:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:15.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:15.276 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:15.277 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.277 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.277 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.277 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:15.277 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:15.277 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:15.277 18:32:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.813 18:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:17.814 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:17.814 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.814 18:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:17.814 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:17.814 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:17.814 18:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.814 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:17.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:11:17.814 00:11:17.814 --- 10.0.0.2 ping statistics --- 00:11:17.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.814 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:11:17.814 00:11:17.814 --- 10.0.0.1 ping statistics --- 00:11:17.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.814 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:17.814 18:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.814 ************************************ 00:11:17.814 START TEST nvmf_filesystem_no_in_capsule 00:11:17.814 ************************************ 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=656117 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:17.814 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 656117 00:11:17.815 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 656117 ']' 00:11:17.815 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.815 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.815 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.815 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.815 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.815 [2024-11-17 18:32:04.145135] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:11:17.815 [2024-11-17 18:32:04.145213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.815 [2024-11-17 18:32:04.218425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.815 [2024-11-17 18:32:04.268342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.815 [2024-11-17 18:32:04.268396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:17.815 [2024-11-17 18:32:04.268409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.815 [2024-11-17 18:32:04.268420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.815 [2024-11-17 18:32:04.268430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.815 [2024-11-17 18:32:04.270094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.815 [2024-11-17 18:32:04.270158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.815 [2024-11-17 18:32:04.270189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.815 [2024-11-17 18:32:04.270192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.118 [2024-11-17 18:32:04.459586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.118 Malloc1 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.118 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.118 [2024-11-17 18:32:04.645436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.119 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.119 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:18.119 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:18.119 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:18.119 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:18.119 18:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:18.119 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:18.119 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.119 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.119 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.119 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:18.119 { 00:11:18.119 "name": "Malloc1", 00:11:18.119 "aliases": [ 00:11:18.119 "6113bcf1-bb65-4288-b1ef-b0c2442160b5" 00:11:18.119 ], 00:11:18.119 "product_name": "Malloc disk", 00:11:18.119 "block_size": 512, 00:11:18.119 "num_blocks": 1048576, 00:11:18.119 "uuid": "6113bcf1-bb65-4288-b1ef-b0c2442160b5", 00:11:18.119 "assigned_rate_limits": { 00:11:18.119 "rw_ios_per_sec": 0, 00:11:18.119 "rw_mbytes_per_sec": 0, 00:11:18.119 "r_mbytes_per_sec": 0, 00:11:18.119 "w_mbytes_per_sec": 0 00:11:18.119 }, 00:11:18.119 "claimed": true, 00:11:18.119 "claim_type": "exclusive_write", 00:11:18.119 "zoned": false, 00:11:18.119 "supported_io_types": { 00:11:18.119 "read": true, 00:11:18.119 "write": true, 00:11:18.119 "unmap": true, 00:11:18.119 "flush": true, 00:11:18.119 "reset": true, 00:11:18.119 "nvme_admin": false, 00:11:18.119 "nvme_io": false, 00:11:18.119 "nvme_io_md": false, 00:11:18.119 "write_zeroes": true, 00:11:18.119 "zcopy": true, 00:11:18.119 "get_zone_info": false, 00:11:18.119 "zone_management": false, 00:11:18.119 "zone_append": false, 00:11:18.119 "compare": false, 00:11:18.119 "compare_and_write": 
false, 00:11:18.119 "abort": true, 00:11:18.119 "seek_hole": false, 00:11:18.119 "seek_data": false, 00:11:18.119 "copy": true, 00:11:18.119 "nvme_iov_md": false 00:11:18.119 }, 00:11:18.119 "memory_domains": [ 00:11:18.119 { 00:11:18.119 "dma_device_id": "system", 00:11:18.119 "dma_device_type": 1 00:11:18.119 }, 00:11:18.119 { 00:11:18.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.119 "dma_device_type": 2 00:11:18.119 } 00:11:18.119 ], 00:11:18.119 "driver_specific": {} 00:11:18.119 } 00:11:18.119 ]' 00:11:18.119 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:18.403 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:18.403 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:18.403 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:18.403 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:18.403 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:18.404 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:18.404 18:32:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.969 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:18.969 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:18.969 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.969 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:18.970 18:32:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:20.868 18:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:20.868 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:21.125 18:32:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:21.690 18:32:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:23.065 18:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.065 ************************************ 00:11:23.065 START TEST filesystem_ext4 00:11:23.065 ************************************ 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:23.065 18:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:23.065 mke2fs 1.47.0 (5-Feb-2023) 00:11:23.065 Discarding device blocks: 0/522240 done 00:11:23.065 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:23.065 Filesystem UUID: f8ddeb05-9884-4ace-9c3c-f8c6162af86e 00:11:23.065 Superblock backups stored on blocks: 00:11:23.065 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:23.065 00:11:23.065 Allocating group tables: 0/64 done 00:11:23.065 Writing inode tables: 0/64 done 00:11:23.065 Creating journal (8192 blocks): done 00:11:23.065 Writing superblocks and filesystem accounting information: 0/64 done 00:11:23.065 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:23.065 18:32:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:28.325 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:28.325 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:28.325 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:28.325 18:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:28.325 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:28.325 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:28.325 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 656117 00:11:28.325 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:28.325 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:28.325 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:28.326 00:11:28.326 real 0m5.534s 00:11:28.326 user 0m0.016s 00:11:28.326 sys 0m0.064s 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:28.326 ************************************ 00:11:28.326 END TEST filesystem_ext4 00:11:28.326 ************************************ 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:28.326 
18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.326 ************************************ 00:11:28.326 START TEST filesystem_btrfs 00:11:28.326 ************************************ 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:28.326 18:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:28.326 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:28.890 btrfs-progs v6.8.1 00:11:28.890 See https://btrfs.readthedocs.io for more information. 00:11:28.890 00:11:28.890 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:28.890 NOTE: several default settings have changed in version 5.15, please make sure 00:11:28.890 this does not affect your deployments: 00:11:28.890 - DUP for metadata (-m dup) 00:11:28.890 - enabled no-holes (-O no-holes) 00:11:28.890 - enabled free-space-tree (-R free-space-tree) 00:11:28.890 00:11:28.890 Label: (null) 00:11:28.890 UUID: 5e9ebc70-d9c8-4ead-b2e2-e97ac1f0e040 00:11:28.890 Node size: 16384 00:11:28.890 Sector size: 4096 (CPU page size: 4096) 00:11:28.890 Filesystem size: 510.00MiB 00:11:28.890 Block group profiles: 00:11:28.890 Data: single 8.00MiB 00:11:28.890 Metadata: DUP 32.00MiB 00:11:28.890 System: DUP 8.00MiB 00:11:28.890 SSD detected: yes 00:11:28.890 Zoned device: no 00:11:28.890 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:28.890 Checksum: crc32c 00:11:28.890 Number of devices: 1 00:11:28.890 Devices: 00:11:28.890 ID SIZE PATH 00:11:28.890 1 510.00MiB /dev/nvme0n1p1 00:11:28.890 00:11:28.890 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:28.890 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:29.147 18:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:29.147 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:29.147 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:29.147 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:29.147 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:29.147 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:29.147 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 656117 00:11:29.147 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:29.147 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:29.147 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:29.147 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:29.147 00:11:29.147 real 0m0.824s 00:11:29.147 user 0m0.026s 00:11:29.147 sys 0m0.092s 00:11:29.147 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.147 
18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:29.147 ************************************ 00:11:29.147 END TEST filesystem_btrfs 00:11:29.148 ************************************ 00:11:29.148 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:29.148 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:29.148 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.148 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.405 ************************************ 00:11:29.405 START TEST filesystem_xfs 00:11:29.405 ************************************ 00:11:29.405 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:29.405 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:29.405 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:29.405 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:29.405 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:29.405 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:29.405 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:29.405 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:29.405 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:29.405 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:29.405 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:29.405 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:29.405 = sectsz=512 attr=2, projid32bit=1 00:11:29.405 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:29.405 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:29.405 data = bsize=4096 blocks=130560, imaxpct=25 00:11:29.405 = sunit=0 swidth=0 blks 00:11:29.405 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:29.405 log =internal log bsize=4096 blocks=16384, version=2 00:11:29.405 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:29.405 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:29.969 Discarding blocks...Done. 
00:11:29.969 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:29.969 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.493 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.493 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:32.493 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.493 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:32.493 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:32.493 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.493 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 656117 00:11:32.493 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.493 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.493 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.493 18:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.493 00:11:32.493 real 0m3.277s 00:11:32.493 user 0m0.016s 00:11:32.493 sys 0m0.066s 00:11:32.493 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.493 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:32.493 ************************************ 00:11:32.493 END TEST filesystem_xfs 00:11:32.493 ************************************ 00:11:32.493 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 656117 00:11:32.751 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 656117 ']' 00:11:32.752 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 656117 00:11:32.752 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:32.752 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.752 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 656117 00:11:32.752 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.752 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.752 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 656117' 00:11:32.752 killing process with pid 656117 00:11:32.752 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 656117 00:11:32.752 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 656117 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:33.317 00:11:33.317 real 0m15.574s 00:11:33.317 user 1m0.327s 00:11:33.317 sys 0m2.066s 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.317 ************************************ 00:11:33.317 END TEST nvmf_filesystem_no_in_capsule 00:11:33.317 ************************************ 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.317 18:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.317 ************************************ 00:11:33.317 START TEST nvmf_filesystem_in_capsule 00:11:33.317 ************************************ 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=658215 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 658215 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 658215 ']' 00:11:33.317 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.317 18:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.318 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.318 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.318 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.318 [2024-11-17 18:32:19.771001] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:11:33.318 [2024-11-17 18:32:19.771108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.318 [2024-11-17 18:32:19.842149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.318 [2024-11-17 18:32:19.884475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.318 [2024-11-17 18:32:19.884531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.318 [2024-11-17 18:32:19.884559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.318 [2024-11-17 18:32:19.884571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.318 [2024-11-17 18:32:19.884582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:33.318 [2024-11-17 18:32:19.886100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.318 [2024-11-17 18:32:19.886162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.318 [2024-11-17 18:32:19.886230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.318 [2024-11-17 18:32:19.886233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.577 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.577 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:33.577 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.577 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.577 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.577 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.577 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:33.577 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:33.577 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.577 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.577 [2024-11-17 18:32:20.028325] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.577 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.577 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:33.577 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.577 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.835 Malloc1 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.835 18:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.835 [2024-11-17 18:32:20.217553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.835 18:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.835 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.836 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:33.836 { 00:11:33.836 "name": "Malloc1", 00:11:33.836 "aliases": [ 00:11:33.836 "7fc749a7-9536-4586-8f1e-d558484e75da" 00:11:33.836 ], 00:11:33.836 "product_name": "Malloc disk", 00:11:33.836 "block_size": 512, 00:11:33.836 "num_blocks": 1048576, 00:11:33.836 "uuid": "7fc749a7-9536-4586-8f1e-d558484e75da", 00:11:33.836 "assigned_rate_limits": { 00:11:33.836 "rw_ios_per_sec": 0, 00:11:33.836 "rw_mbytes_per_sec": 0, 00:11:33.836 "r_mbytes_per_sec": 0, 00:11:33.836 "w_mbytes_per_sec": 0 00:11:33.836 }, 00:11:33.836 "claimed": true, 00:11:33.836 "claim_type": "exclusive_write", 00:11:33.836 "zoned": false, 00:11:33.836 "supported_io_types": { 00:11:33.836 "read": true, 00:11:33.836 "write": true, 00:11:33.836 "unmap": true, 00:11:33.836 "flush": true, 00:11:33.836 "reset": true, 00:11:33.836 "nvme_admin": false, 00:11:33.836 "nvme_io": false, 00:11:33.836 "nvme_io_md": false, 00:11:33.836 "write_zeroes": true, 00:11:33.836 "zcopy": true, 00:11:33.836 "get_zone_info": false, 00:11:33.836 "zone_management": false, 00:11:33.836 "zone_append": false, 00:11:33.836 "compare": false, 00:11:33.836 "compare_and_write": false, 00:11:33.836 "abort": true, 00:11:33.836 "seek_hole": false, 00:11:33.836 "seek_data": false, 00:11:33.836 "copy": true, 00:11:33.836 "nvme_iov_md": false 00:11:33.836 }, 00:11:33.836 "memory_domains": [ 00:11:33.836 { 00:11:33.836 "dma_device_id": "system", 00:11:33.836 "dma_device_type": 1 00:11:33.836 }, 00:11:33.836 { 00:11:33.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.836 "dma_device_type": 2 00:11:33.836 } 00:11:33.836 ], 00:11:33.836 
"driver_specific": {} 00:11:33.836 } 00:11:33.836 ]' 00:11:33.836 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:33.836 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:33.836 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:33.836 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:33.836 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:33.836 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:33.836 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:33.836 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.402 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.402 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:34.402 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.402 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:34.402 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:36.929 18:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:36.929 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:36.929 18:32:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:37.862 18:32:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.796 ************************************ 00:11:38.796 START TEST filesystem_in_capsule_ext4 00:11:38.796 ************************************ 00:11:38.796 18:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:38.796 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:38.796 mke2fs 1.47.0 (5-Feb-2023) 00:11:39.054 Discarding device blocks: 
0/522240 done 00:11:39.054 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:39.054 Filesystem UUID: 1da1ee43-72d8-4c22-8fa3-d25662721892 00:11:39.054 Superblock backups stored on blocks: 00:11:39.054 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:39.054 00:11:39.054 Allocating group tables: 0/64 done 00:11:39.054 Writing inode tables: 0/64 done 00:11:39.054 Creating journal (8192 blocks): done 00:11:39.054 Writing superblocks and filesystem accounting information: 0/64 done 00:11:39.054 00:11:39.054 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:39.054 18:32:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:44.315 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:44.315 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:44.315 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:44.315 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:44.315 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:44.315 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:44.315 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 658215 00:11:44.315 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:44.315 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:44.573 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:44.573 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:44.573 00:11:44.573 real 0m5.609s 00:11:44.573 user 0m0.015s 00:11:44.573 sys 0m0.055s 00:11:44.573 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.573 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:44.573 ************************************ 00:11:44.573 END TEST filesystem_in_capsule_ext4 00:11:44.573 ************************************ 00:11:44.573 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:44.573 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:44.573 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.573 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.573 ************************************ 00:11:44.573 START 
TEST filesystem_in_capsule_btrfs 00:11:44.573 ************************************ 00:11:44.573 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:44.573 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:44.573 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:44.573 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:44.573 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:44.574 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:44.574 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:44.574 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:44.574 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:44.574 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:44.574 18:32:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:44.574 btrfs-progs v6.8.1 00:11:44.574 See https://btrfs.readthedocs.io for more information. 00:11:44.574 00:11:44.574 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:44.574 NOTE: several default settings have changed in version 5.15, please make sure 00:11:44.574 this does not affect your deployments: 00:11:44.574 - DUP for metadata (-m dup) 00:11:44.574 - enabled no-holes (-O no-holes) 00:11:44.574 - enabled free-space-tree (-R free-space-tree) 00:11:44.574 00:11:44.574 Label: (null) 00:11:44.574 UUID: 2cd8ae10-dcba-4d6f-98e5-1732a6660308 00:11:44.574 Node size: 16384 00:11:44.574 Sector size: 4096 (CPU page size: 4096) 00:11:44.574 Filesystem size: 510.00MiB 00:11:44.574 Block group profiles: 00:11:44.574 Data: single 8.00MiB 00:11:44.574 Metadata: DUP 32.00MiB 00:11:44.574 System: DUP 8.00MiB 00:11:44.574 SSD detected: yes 00:11:44.574 Zoned device: no 00:11:44.574 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:44.574 Checksum: crc32c 00:11:44.574 Number of devices: 1 00:11:44.574 Devices: 00:11:44.574 ID SIZE PATH 00:11:44.574 1 510.00MiB /dev/nvme0n1p1 00:11:44.574 00:11:44.574 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:44.574 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 658215 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:45.140 00:11:45.140 real 0m0.574s 00:11:45.140 user 0m0.015s 00:11:45.140 sys 0m0.108s 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:45.140 ************************************ 00:11:45.140 END TEST filesystem_in_capsule_btrfs 00:11:45.140 ************************************ 00:11:45.140 18:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.140 ************************************ 00:11:45.140 START TEST filesystem_in_capsule_xfs 00:11:45.140 ************************************ 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:45.140 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:45.140 
18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:45.141 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:45.141 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:45.141 18:32:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:45.141 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:45.141 = sectsz=512 attr=2, projid32bit=1 00:11:45.141 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:45.141 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:45.141 data = bsize=4096 blocks=130560, imaxpct=25 00:11:45.141 = sunit=0 swidth=0 blks 00:11:45.141 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:45.141 log =internal log bsize=4096 blocks=16384, version=2 00:11:45.141 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:45.141 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:46.074 Discarding blocks...Done. 
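The `make_filesystem` trace above shows the harness picking a force flag by filesystem type (`'[' xfs = ext4 ']'` fails, so `force=-f`) before running `mkfs.xfs -f`. A minimal sketch of that flag-selection logic, assuming (it is not visible in this trace) that the ext4 branch uses `-F`, which is what `mkfs.ext4` expects:

```shell
#!/usr/bin/env bash
# Sketch of the make_filesystem force-flag selection seen in
# common/autotest_common.sh@930-941. The ext4 -> -F mapping is an assumption;
# only the non-ext4 -> -f path is visible in the log above.
make_filesystem_flags() {
  local fstype=$1 force
  if [ "$fstype" = ext4 ]; then
    force=-F          # mkfs.ext4 uses -F to force
  else
    force=-f          # mkfs.xfs, mkfs.btrfs use -f
  fi
  echo "mkfs.$fstype $force"
}

make_filesystem_flags xfs    # prints: mkfs.xfs -f
make_filesystem_flags ext4   # prints: mkfs.ext4 -F
```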
00:11:46.074 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:46.074 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.600 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 658215 00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:48.600 00:11:48.600 real 0m3.515s 00:11:48.600 user 0m0.017s 00:11:48.600 sys 0m0.064s 00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:48.600 ************************************ 00:11:48.600 END TEST filesystem_in_capsule_xfs 00:11:48.600 ************************************ 00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:48.600 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.858 18:32:35 
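The `waitforserial_disconnect` steps above (`lsblk -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME`, a retry counter, then `return 0`) can be sketched as a bounded polling loop. This is an illustrative reconstruction, not the actual `common/autotest_common.sh` code; the retry limit of 15 is an assumption:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of waitforserial_disconnect: poll lsblk until no block
# device reports the given NVMe serial, bounded by a retry count.
waitforserial_disconnect() {
  local serial=$1 i=0
  # grep -q -w matches the serial as a whole word in lsblk's output
  while lsblk -l -o NAME,SERIAL 2>/dev/null | grep -q -w "$serial"; do
    if ((++i > 15)); then
      echo "device with serial $serial still present after $i checks" >&2
      return 1
    fi
    sleep 1
  done
  return 0
}
```

On the machine in this log the loop exits on the first check once `nvme disconnect` has torn down the controller, which is why the trace goes straight to `return 0`.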
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 658215 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 658215 ']' 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 658215 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.858 18:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 658215 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 658215' 00:11:48.858 killing process with pid 658215 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 658215 00:11:48.858 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 658215 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:49.425 00:11:49.425 real 0m16.002s 00:11:49.425 user 1m2.052s 00:11:49.425 sys 0m2.006s 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.425 ************************************ 00:11:49.425 END TEST nvmf_filesystem_in_capsule 00:11:49.425 ************************************ 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
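The `killprocess 658215` trace above probes liveness with `kill -0`, resolves the process name with `ps --no-headers -o comm=` (refusing to kill anything named `sudo`), and then kills and waits. A hedged sketch of that pattern, omitting the sudo guard for brevity:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from common/autotest_common.sh@954-978.
# Not the original code; the error handling here is simplified.
killprocess() {
  local pid=$1
  # kill -0 sends no signal; it only checks that the pid exists and is signalable
  if ! kill -0 "$pid" 2>/dev/null; then
    echo "pid $pid already gone"
    return 0
  fi
  local name
  name=$(ps --no-headers -o comm= -p "$pid")
  echo "killing process with pid $pid ($name)"
  kill "$pid"
  wait "$pid" 2>/dev/null || true   # reap if it is our child
}
```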
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:49.425 rmmod nvme_tcp 00:11:49.425 rmmod nvme_fabrics 00:11:49.425 rmmod nvme_keyring 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:49.425 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:49.426 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:49.426 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:49.426 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:49.426 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:49.426 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:49.426 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:49.426 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:49.426 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.426 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.426 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.333 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:51.333 00:11:51.333 real 0m36.509s 00:11:51.333 user 2m3.550s 00:11:51.333 sys 0m5.856s 00:11:51.333 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.333 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.333 ************************************ 00:11:51.333 END TEST nvmf_filesystem 00:11:51.333 ************************************ 00:11:51.333 18:32:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:51.333 18:32:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:51.333 18:32:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.333 18:32:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:51.333 ************************************ 00:11:51.333 START TEST nvmf_target_discovery 00:11:51.333 ************************************ 00:11:51.333 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:51.590 * Looking for test storage... 
00:11:51.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.590 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:51.590 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:51.590 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:51.590 
18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
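The `cmp_versions 1.15 '<' 2` trace above splits each version on `.` into an array (`IFS=.-:` / `read -ra`) and compares field by field, padding the shorter version with zeros. A compact sketch of that field-wise comparison, assuming simple dotted-numeric versions (the real `scripts/common.sh` also splits on `-` and `:`):

```shell
#!/usr/bin/env bash
# Sketch of the lt/cmp_versions logic from scripts/common.sh@333-368:
# returns 0 (true) when $1 is strictly less than $2.
ver_lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i x y
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0}   # missing fields compare as 0, so 2 == 2.0
    y=${b[i]:-0}
    if ((x < y)); then return 0; fi
    if ((x > y)); then return 1; fi
  done
  return 1   # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"   # matches the lcov check in the log
```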
lcov_function_coverage=1' 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:51.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.590 --rc genhtml_branch_coverage=1 00:11:51.590 --rc genhtml_function_coverage=1 00:11:51.590 --rc genhtml_legend=1 00:11:51.590 --rc geninfo_all_blocks=1 00:11:51.590 --rc geninfo_unexecuted_blocks=1 00:11:51.590 00:11:51.590 ' 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:51.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.590 --rc genhtml_branch_coverage=1 00:11:51.590 --rc genhtml_function_coverage=1 00:11:51.590 --rc genhtml_legend=1 00:11:51.590 --rc geninfo_all_blocks=1 00:11:51.590 --rc geninfo_unexecuted_blocks=1 00:11:51.590 00:11:51.590 ' 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:51.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.590 --rc genhtml_branch_coverage=1 00:11:51.590 --rc genhtml_function_coverage=1 00:11:51.590 --rc genhtml_legend=1 00:11:51.590 --rc geninfo_all_blocks=1 00:11:51.590 --rc geninfo_unexecuted_blocks=1 00:11:51.590 00:11:51.590 ' 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:51.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.590 --rc genhtml_branch_coverage=1 00:11:51.590 --rc genhtml_function_coverage=1 00:11:51.590 --rc genhtml_legend=1 00:11:51.590 --rc geninfo_all_blocks=1 00:11:51.590 --rc geninfo_unexecuted_blocks=1 00:11:51.590 00:11:51.590 ' 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.590 18:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:51.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
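The `[: : integer expression expected` error above comes from `'[' '' -eq 1 ']'`: an empty variable fed to a numeric test operator. The standard fix is to default the expansion before comparing. A tiny sketch (the function name `flag_eq` is hypothetical, not SPDK code):

```shell
#!/usr/bin/env bash
# Defensive numeric flag test: ${1:-0} substitutes 0 when the variable is
# empty or unset, so [ ... -eq ... ] never sees an empty operand and never
# emits "integer expression expected".
flag_eq() {
  [ "${1:-0}" -eq "${2}" ]
}

flag_eq ""  1 || echo "empty flag treated as 0"
flag_eq 1 1 && echo "set flag compares normally"
```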
NULL_BDEV_SIZE=102400 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:51.590 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.123 18:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.123 18:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:54.123 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:54.124 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:54.124 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.124 18:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:54.124 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:54.124 18:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:54.124 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:54.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:54.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:11:54.124 00:11:54.124 --- 10.0.0.2 ping statistics --- 00:11:54.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.124 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:54.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:11:54.124 00:11:54.124 --- 10.0.0.1 ping statistics --- 00:11:54.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.124 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=662237 00:11:54.124 18:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 662237 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 662237 ']' 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.124 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.124 [2024-11-17 18:32:40.431869] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:11:54.125 [2024-11-17 18:32:40.431977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.125 [2024-11-17 18:32:40.506327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.125 [2024-11-17 18:32:40.549740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
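Above, `waitforlisten 662237` blocks until the freshly launched `nvmf_tgt` (started inside the `cvl_0_0_ns_spdk` namespace) is up and listening on `/var/tmp/spdk.sock`. A minimal, hypothetical re-implementation of that polling loop (the real helper in `autotest_common.sh` presumably does more, e.g. verifying the socket actually answers RPCs):

```shell
# Hypothetical sketch of the waitforlisten step traced above: poll until the
# target PID is still alive AND its RPC UNIX socket has appeared, or give up.
waitforlisten_sketch() {
    pid=$1
    rpc_addr=${2:-/var/tmp/spdk.sock}   # default RPC socket path from the log
    retries=${3:-100}
    while [ "$retries" -gt 0 ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # process exited early
        [ -S "$rpc_addr" ] && return 0           # socket is up
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1                                      # timed out
}
```

Only once this returns does the test proceed to `rpc_cmd nvmf_create_transport -t tcp -o -u 8192` and the per-subsystem setup below.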
00:11:54.125 [2024-11-17 18:32:40.549791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.125 [2024-11-17 18:32:40.549813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.125 [2024-11-17 18:32:40.549825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.125 [2024-11-17 18:32:40.549834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:54.125 [2024-11-17 18:32:40.551257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.125 [2024-11-17 18:32:40.551374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.125 [2024-11-17 18:32:40.551472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.125 [2024-11-17 18:32:40.551479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.125 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.125 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:54.125 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.125 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:54.125 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.125 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.125 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.125 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.125 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.383 [2024-11-17 18:32:40.701841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.383 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.383 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 Null1 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 
18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 [2024-11-17 18:32:40.742168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 Null2 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 
18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 Null3 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 Null4 00:11:54.384 
18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.384 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:54.643 00:11:54.643 Discovery Log Number of Records 6, Generation counter 6 00:11:54.643 =====Discovery Log Entry 0====== 00:11:54.643 trtype: tcp 00:11:54.643 adrfam: ipv4 00:11:54.643 subtype: current discovery subsystem 00:11:54.643 treq: not required 00:11:54.643 portid: 0 00:11:54.643 trsvcid: 4420 00:11:54.643 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:54.643 traddr: 10.0.0.2 00:11:54.643 eflags: explicit discovery connections, duplicate discovery information 00:11:54.643 sectype: none 00:11:54.643 =====Discovery Log Entry 1====== 00:11:54.643 trtype: tcp 00:11:54.643 adrfam: ipv4 00:11:54.643 subtype: nvme subsystem 00:11:54.643 treq: not required 00:11:54.643 portid: 0 00:11:54.643 trsvcid: 4420 00:11:54.643 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:54.643 traddr: 10.0.0.2 00:11:54.643 eflags: none 00:11:54.643 sectype: none 00:11:54.643 =====Discovery Log Entry 2====== 00:11:54.643 
trtype: tcp 00:11:54.643 adrfam: ipv4 00:11:54.643 subtype: nvme subsystem 00:11:54.643 treq: not required 00:11:54.643 portid: 0 00:11:54.643 trsvcid: 4420 00:11:54.643 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:54.643 traddr: 10.0.0.2 00:11:54.643 eflags: none 00:11:54.643 sectype: none 00:11:54.643 =====Discovery Log Entry 3====== 00:11:54.643 trtype: tcp 00:11:54.643 adrfam: ipv4 00:11:54.643 subtype: nvme subsystem 00:11:54.643 treq: not required 00:11:54.643 portid: 0 00:11:54.643 trsvcid: 4420 00:11:54.643 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:54.643 traddr: 10.0.0.2 00:11:54.643 eflags: none 00:11:54.643 sectype: none 00:11:54.643 =====Discovery Log Entry 4====== 00:11:54.643 trtype: tcp 00:11:54.643 adrfam: ipv4 00:11:54.643 subtype: nvme subsystem 00:11:54.643 treq: not required 00:11:54.643 portid: 0 00:11:54.643 trsvcid: 4420 00:11:54.643 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:54.643 traddr: 10.0.0.2 00:11:54.643 eflags: none 00:11:54.643 sectype: none 00:11:54.643 =====Discovery Log Entry 5====== 00:11:54.643 trtype: tcp 00:11:54.643 adrfam: ipv4 00:11:54.643 subtype: discovery subsystem referral 00:11:54.643 treq: not required 00:11:54.643 portid: 0 00:11:54.643 trsvcid: 4430 00:11:54.643 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:54.643 traddr: 10.0.0.2 00:11:54.643 eflags: none 00:11:54.643 sectype: none 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:54.643 Perform nvmf subsystem discovery via RPC 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.643 [ 00:11:54.643 { 00:11:54.643 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:54.643 "subtype": "Discovery", 00:11:54.643 "listen_addresses": [ 00:11:54.643 { 00:11:54.643 "trtype": "TCP", 00:11:54.643 "adrfam": "IPv4", 00:11:54.643 "traddr": "10.0.0.2", 00:11:54.643 "trsvcid": "4420" 00:11:54.643 } 00:11:54.643 ], 00:11:54.643 "allow_any_host": true, 00:11:54.643 "hosts": [] 00:11:54.643 }, 00:11:54.643 { 00:11:54.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.643 "subtype": "NVMe", 00:11:54.643 "listen_addresses": [ 00:11:54.643 { 00:11:54.643 "trtype": "TCP", 00:11:54.643 "adrfam": "IPv4", 00:11:54.643 "traddr": "10.0.0.2", 00:11:54.643 "trsvcid": "4420" 00:11:54.643 } 00:11:54.643 ], 00:11:54.643 "allow_any_host": true, 00:11:54.643 "hosts": [], 00:11:54.643 "serial_number": "SPDK00000000000001", 00:11:54.643 "model_number": "SPDK bdev Controller", 00:11:54.643 "max_namespaces": 32, 00:11:54.643 "min_cntlid": 1, 00:11:54.643 "max_cntlid": 65519, 00:11:54.643 "namespaces": [ 00:11:54.643 { 00:11:54.643 "nsid": 1, 00:11:54.643 "bdev_name": "Null1", 00:11:54.643 "name": "Null1", 00:11:54.643 "nguid": "99B4118E5A2F43A7B605C7C55B8BECC6", 00:11:54.643 "uuid": "99b4118e-5a2f-43a7-b605-c7c55b8becc6" 00:11:54.643 } 00:11:54.643 ] 00:11:54.643 }, 00:11:54.643 { 00:11:54.643 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:54.643 "subtype": "NVMe", 00:11:54.643 "listen_addresses": [ 00:11:54.643 { 00:11:54.643 "trtype": "TCP", 00:11:54.643 "adrfam": "IPv4", 00:11:54.643 "traddr": "10.0.0.2", 00:11:54.643 "trsvcid": "4420" 00:11:54.643 } 00:11:54.643 ], 00:11:54.643 "allow_any_host": true, 00:11:54.643 "hosts": [], 00:11:54.643 "serial_number": "SPDK00000000000002", 00:11:54.643 "model_number": "SPDK bdev Controller", 00:11:54.643 "max_namespaces": 32, 00:11:54.643 "min_cntlid": 1, 00:11:54.643 "max_cntlid": 65519, 00:11:54.643 "namespaces": [ 00:11:54.643 { 00:11:54.643 "nsid": 1, 00:11:54.643 "bdev_name": "Null2", 00:11:54.643 "name": "Null2", 00:11:54.643 "nguid": "66B8FF495A5742538C7FBD8B572840B2", 
00:11:54.643 "uuid": "66b8ff49-5a57-4253-8c7f-bd8b572840b2" 00:11:54.643 } 00:11:54.643 ] 00:11:54.643 }, 00:11:54.643 { 00:11:54.643 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:54.643 "subtype": "NVMe", 00:11:54.643 "listen_addresses": [ 00:11:54.643 { 00:11:54.643 "trtype": "TCP", 00:11:54.643 "adrfam": "IPv4", 00:11:54.643 "traddr": "10.0.0.2", 00:11:54.643 "trsvcid": "4420" 00:11:54.643 } 00:11:54.643 ], 00:11:54.643 "allow_any_host": true, 00:11:54.643 "hosts": [], 00:11:54.643 "serial_number": "SPDK00000000000003", 00:11:54.643 "model_number": "SPDK bdev Controller", 00:11:54.643 "max_namespaces": 32, 00:11:54.643 "min_cntlid": 1, 00:11:54.643 "max_cntlid": 65519, 00:11:54.643 "namespaces": [ 00:11:54.643 { 00:11:54.643 "nsid": 1, 00:11:54.643 "bdev_name": "Null3", 00:11:54.643 "name": "Null3", 00:11:54.643 "nguid": "B41CA6C8A16E414CBB194F7A3EC735F0", 00:11:54.643 "uuid": "b41ca6c8-a16e-414c-bb19-4f7a3ec735f0" 00:11:54.643 } 00:11:54.643 ] 00:11:54.643 }, 00:11:54.643 { 00:11:54.643 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:54.643 "subtype": "NVMe", 00:11:54.643 "listen_addresses": [ 00:11:54.643 { 00:11:54.643 "trtype": "TCP", 00:11:54.643 "adrfam": "IPv4", 00:11:54.643 "traddr": "10.0.0.2", 00:11:54.643 "trsvcid": "4420" 00:11:54.643 } 00:11:54.643 ], 00:11:54.643 "allow_any_host": true, 00:11:54.643 "hosts": [], 00:11:54.643 "serial_number": "SPDK00000000000004", 00:11:54.643 "model_number": "SPDK bdev Controller", 00:11:54.643 "max_namespaces": 32, 00:11:54.643 "min_cntlid": 1, 00:11:54.643 "max_cntlid": 65519, 00:11:54.643 "namespaces": [ 00:11:54.643 { 00:11:54.643 "nsid": 1, 00:11:54.643 "bdev_name": "Null4", 00:11:54.643 "name": "Null4", 00:11:54.643 "nguid": "D537802D5CB641BF90A2676FC6688438", 00:11:54.643 "uuid": "d537802d-5cb6-41bf-90a2-676fc6688438" 00:11:54.643 } 00:11:54.643 ] 00:11:54.643 } 00:11:54.643 ] 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.643 
18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.643 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
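The teardown traced above deletes each subsystem and its backing null bdev in a `seq 1 4` loop, then removes the discovery referral. A standalone sketch of that sequence, with `rpc_cmd` stubbed out (in the real test it wraps `scripts/rpc.py`):

```shell
# Sketch of the teardown loop from target/discovery.sh; rpc_cmd is a stub
# standing in for the SPDK RPC wrapper used in the trace above.
rpc_cmd() { echo "rpc: $*"; }

for i in $(seq 1 4); do
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    rpc_cmd bdev_null_delete "Null$i"
done
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
```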
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.644 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.644 rmmod nvme_tcp 00:11:54.902 rmmod nvme_fabrics 00:11:54.902 rmmod nvme_keyring 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 662237 ']' 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 662237 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 662237 ']' 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 662237 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:54.902 
18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 662237 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 662237' 00:11:54.902 killing process with pid 662237 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 662237 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 662237 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.902 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:55.162 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:55.162 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:55.162 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.162 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.162 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.066 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:57.066 00:11:57.066 real 0m5.629s 00:11:57.066 user 0m4.704s 00:11:57.066 sys 0m1.962s 00:11:57.066 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.066 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:57.066 ************************************ 00:11:57.066 END TEST nvmf_target_discovery 00:11:57.066 ************************************ 00:11:57.066 18:32:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:57.066 18:32:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:57.066 18:32:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.066 18:32:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:57.066 ************************************ 00:11:57.066 START TEST nvmf_referrals 00:11:57.066 ************************************ 00:11:57.066 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:57.066 * Looking for test storage... 
00:11:57.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.066 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:57.067 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:57.067 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:57.327 18:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:57.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.327 
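The `lt 1.15 2` / `cmp_versions` trace above splits each version on `.` and compares numerically field by field. A minimal sketch of the same idea (`version_lt` is an illustrative name, not the helper's real one):

```shell
# Component-wise "less than" over dotted version strings, mirroring the
# cmp_versions walk traced above; missing fields default to 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # equal versions are not less-than
}
```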
--rc genhtml_branch_coverage=1 00:11:57.327 --rc genhtml_function_coverage=1 00:11:57.327 --rc genhtml_legend=1 00:11:57.327 --rc geninfo_all_blocks=1 00:11:57.327 --rc geninfo_unexecuted_blocks=1 00:11:57.327 00:11:57.327 ' 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:57.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.327 --rc genhtml_branch_coverage=1 00:11:57.327 --rc genhtml_function_coverage=1 00:11:57.327 --rc genhtml_legend=1 00:11:57.327 --rc geninfo_all_blocks=1 00:11:57.327 --rc geninfo_unexecuted_blocks=1 00:11:57.327 00:11:57.327 ' 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:57.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.327 --rc genhtml_branch_coverage=1 00:11:57.327 --rc genhtml_function_coverage=1 00:11:57.327 --rc genhtml_legend=1 00:11:57.327 --rc geninfo_all_blocks=1 00:11:57.327 --rc geninfo_unexecuted_blocks=1 00:11:57.327 00:11:57.327 ' 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:57.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.327 --rc genhtml_branch_coverage=1 00:11:57.327 --rc genhtml_function_coverage=1 00:11:57.327 --rc genhtml_legend=1 00:11:57.327 --rc geninfo_all_blocks=1 00:11:57.327 --rc geninfo_unexecuted_blocks=1 00:11:57.327 00:11:57.327 ' 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.327 
18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.327 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.328 18:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:57.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:57.328 18:32:43 
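The `[: : integer expression expected` error captured above comes from running `[ '' -eq 1 ]` when the tested variable is empty. A defensive pattern that avoids it is to default the expansion before the numeric test (`HUGE_EVEN_ALLOC` here is a stand-in variable name, not necessarily the one in `nvmf/common.sh`):

```shell
# The log above records: "[: : integer expression expected" from testing an
# empty string with -eq. Defaulting the expansion keeps the test well-formed.
HUGE_EVEN_ALLOC=""
if [ "${HUGE_EVEN_ALLOC:-0}" -eq 1 ]; then
    echo "forcing even hugepage allocation"
else
    echo "default hugepage handling"
fi
```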
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:57.328 18:32:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:59.952 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:59.952 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:59.952 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:59.952 18:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:59.952 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
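The device discovery traced above works by globbing `/sys/bus/pci/devices/<addr>/net/*` to map each e810 PCI function to its kernel netdev (`cvl_0_0`, `cvl_0_1`). A sketch of that lookup, parameterized on the sysfs root so it can be demonstrated against a scratch directory (the helper name and second parameter are illustrative):

```shell
# Map a PCI address to its network interface names via the sysfs layout the
# trace above globs; base defaults to the real sysfs tree.
pci_net_devs() {
    local pci=$1 base=${2:-/sys/bus/pci/devices} d
    for d in "$base/$pci/net/"*; do
        [ -e "$d" ] && echo "${d##*/}"
    done
}

# Demo against a scratch tree mimicking the sysfs layout from the log.
mkdir -p /tmp/pci_demo/0000:0a:00.0/net/cvl_0_0
pci_net_devs 0000:0a:00.0 /tmp/pci_demo
```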
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.952 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.953 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:59.953 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.953 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.953 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:59.953 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:59.953 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.953 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.953 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:59.953 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:59.953 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.953 18:32:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:59.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:11:59.953 00:11:59.953 --- 10.0.0.2 ping statistics --- 00:11:59.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.953 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:59.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:11:59.953 00:11:59.953 --- 10.0.0.1 ping statistics --- 00:11:59.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.953 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=664339 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 664339 00:11:59.953 
18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 664339 ']' 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.953 [2024-11-17 18:32:46.194334] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:11:59.953 [2024-11-17 18:32:46.194414] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.953 [2024-11-17 18:32:46.271566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.953 [2024-11-17 18:32:46.320636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.953 [2024-11-17 18:32:46.320728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:59.953 [2024-11-17 18:32:46.320743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.953 [2024-11-17 18:32:46.320754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.953 [2024-11-17 18:32:46.320764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.953 [2024-11-17 18:32:46.322286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.953 [2024-11-17 18:32:46.322369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.953 [2024-11-17 18:32:46.322372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.953 [2024-11-17 18:32:46.322309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.953 [2024-11-17 18:32:46.470587] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.953 [2024-11-17 18:32:46.482856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:59.953 18:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.953 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:00.212 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.469 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:00.469 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:00.469 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:00.469 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.469 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.469 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.469 18:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:00.469 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.469 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.469 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.469 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:00.469 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.469 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.469 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.470 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.470 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:00.470 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.470 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.470 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.470 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:00.470 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:00.470 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:00.470 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:00.470 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.470 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:00.470 18:32:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:00.727 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.985 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:00.985 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:00.985 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:00.986 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:00.986 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:00.986 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.986 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:00.986 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:00.986 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:00.986 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:00.986 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:00.986 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.986 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:01.243 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:01.243 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:01.243 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.243 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.243 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.243 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:01.243 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:01.243 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.243 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:01.243 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.243 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.244 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:01.244 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.244 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:01.244 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:01.244 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:01.244 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.244 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.244 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.244 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.244 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.501 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:01.501 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:01.501 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:01.501 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:01.501 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:01.501 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.501 18:32:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:01.759 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:01.759 18:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:01.759 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:01.759 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:01.759 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.759 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:01.759 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:01.759 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:01.759 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.759 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.759 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.017 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:02.017 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.017 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:02.017 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:02.017 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.017 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:02.017 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:02.017 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:02.017 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:02.017 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.017 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:02.017 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.275 rmmod nvme_tcp 00:12:02.275 rmmod nvme_fabrics 00:12:02.275 rmmod nvme_keyring 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 664339 ']' 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 664339 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 664339 ']' 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 664339 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 664339 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 664339' 00:12:02.275 killing process with pid 664339 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 664339 00:12:02.275 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 664339 00:12:02.534 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:02.534 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:02.534 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:02.534 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:02.534 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:02.534 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:02.534 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:02.534 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:02.534 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:02.534 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.534 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.534 18:32:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.442 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:04.442 00:12:04.442 real 0m7.355s 00:12:04.442 user 0m11.794s 00:12:04.442 sys 0m2.389s 00:12:04.443 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.443 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.443 ************************************ 
00:12:04.443 END TEST nvmf_referrals 00:12:04.443 ************************************ 00:12:04.443 18:32:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:04.443 18:32:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.443 18:32:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.443 18:32:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:04.443 ************************************ 00:12:04.443 START TEST nvmf_connect_disconnect 00:12:04.443 ************************************ 00:12:04.443 18:32:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:04.703 * Looking for test storage... 
00:12:04.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:04.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.703 --rc genhtml_branch_coverage=1 00:12:04.703 --rc genhtml_function_coverage=1 00:12:04.703 --rc genhtml_legend=1 00:12:04.703 --rc geninfo_all_blocks=1 00:12:04.703 --rc geninfo_unexecuted_blocks=1 00:12:04.703 00:12:04.703 ' 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:04.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.703 --rc genhtml_branch_coverage=1 00:12:04.703 --rc genhtml_function_coverage=1 00:12:04.703 --rc genhtml_legend=1 00:12:04.703 --rc geninfo_all_blocks=1 00:12:04.703 --rc geninfo_unexecuted_blocks=1 00:12:04.703 00:12:04.703 ' 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:04.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.703 --rc genhtml_branch_coverage=1 00:12:04.703 --rc genhtml_function_coverage=1 00:12:04.703 --rc genhtml_legend=1 00:12:04.703 --rc geninfo_all_blocks=1 00:12:04.703 --rc geninfo_unexecuted_blocks=1 00:12:04.703 00:12:04.703 ' 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:04.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.703 --rc genhtml_branch_coverage=1 00:12:04.703 --rc genhtml_function_coverage=1 00:12:04.703 --rc genhtml_legend=1 00:12:04.703 --rc geninfo_all_blocks=1 00:12:04.703 --rc geninfo_unexecuted_blocks=1 00:12:04.703 00:12:04.703 ' 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:04.703 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:04.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:04.704 18:32:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.238 18:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.238 18:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:07.238 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.238 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:07.238 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.239 18:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:07.239 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.239 18:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:07.239 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.239 18:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:12:07.239 00:12:07.239 --- 10.0.0.2 ping statistics --- 00:12:07.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.239 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:07.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:12:07.239 00:12:07.239 --- 10.0.0.1 ping statistics --- 00:12:07.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.239 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=666664 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 666664 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 666664 ']' 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.239 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.239 [2024-11-17 18:32:53.700577] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:12:07.239 [2024-11-17 18:32:53.700666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.239 [2024-11-17 18:32:53.774685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.498 [2024-11-17 18:32:53.826982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:07.498 [2024-11-17 18:32:53.827032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.498 [2024-11-17 18:32:53.827046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.498 [2024-11-17 18:32:53.827057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.498 [2024-11-17 18:32:53.827074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.498 [2024-11-17 18:32:53.828580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.498 [2024-11-17 18:32:53.828645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.498 [2024-11-17 18:32:53.828698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.498 [2024-11-17 18:32:53.828701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.498 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.498 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:07.498 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:07.498 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.498 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.498 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.498 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:07.498 18:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.498 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.498 [2024-11-17 18:32:53.981538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.498 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.498 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:07.498 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.498 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.498 18:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.498 [2024-11-17 18:32:54.057140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:07.498 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:10.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.088 
00:14:08.961 [2024-11-17 18:34:55.352461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b8f60 is same with the state(6) to be set 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:59.795 rmmod nvme_tcp 00:15:59.795 
rmmod nvme_fabrics 00:15:59.795 rmmod nvme_keyring 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 666664 ']' 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 666664 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 666664 ']' 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 666664 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 666664 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 666664' 00:15:59.795 killing process with pid 666664 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 666664 00:15:59.795 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 666664 
00:16:00.053 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:00.053 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:00.053 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:00.054 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:00.054 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:00.054 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:00.054 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:00.054 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:00.054 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:00.054 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.054 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.054 18:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:02.590 00:16:02.590 real 3m57.615s 00:16:02.590 user 15m5.625s 00:16:02.590 sys 0m34.443s 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:02.590 
************************************ 00:16:02.590 END TEST nvmf_connect_disconnect 00:16:02.590 ************************************ 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:02.590 ************************************ 00:16:02.590 START TEST nvmf_multitarget 00:16:02.590 ************************************ 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:02.590 * Looking for test storage... 
00:16:02.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:02.590 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.590 --rc genhtml_branch_coverage=1 00:16:02.590 --rc genhtml_function_coverage=1 00:16:02.590 --rc genhtml_legend=1 00:16:02.590 --rc geninfo_all_blocks=1 00:16:02.590 --rc geninfo_unexecuted_blocks=1 00:16:02.590 00:16:02.590 ' 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:02.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.590 --rc genhtml_branch_coverage=1 00:16:02.590 --rc genhtml_function_coverage=1 00:16:02.590 --rc genhtml_legend=1 00:16:02.590 --rc geninfo_all_blocks=1 00:16:02.590 --rc geninfo_unexecuted_blocks=1 00:16:02.590 00:16:02.590 ' 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:02.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.590 --rc genhtml_branch_coverage=1 00:16:02.590 --rc genhtml_function_coverage=1 00:16:02.590 --rc genhtml_legend=1 00:16:02.590 --rc geninfo_all_blocks=1 00:16:02.590 --rc geninfo_unexecuted_blocks=1 00:16:02.590 00:16:02.590 ' 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:02.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.590 --rc genhtml_branch_coverage=1 00:16:02.590 --rc genhtml_function_coverage=1 00:16:02.590 --rc genhtml_legend=1 00:16:02.590 --rc geninfo_all_blocks=1 00:16:02.590 --rc geninfo_unexecuted_blocks=1 00:16:02.590 00:16:02.590 ' 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.590 18:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.590 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:02.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.591 18:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:02.591 18:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:04.494 18:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:04.494 18:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:04.494 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:04.494 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.494 18:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:04.494 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.494 
18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:04.494 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.494 18:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.494 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:04.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:16:04.754 00:16:04.754 --- 10.0.0.2 ping statistics --- 00:16:04.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.754 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:04.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:04.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:16:04.754 00:16:04.754 --- 10.0.0.1 ping statistics --- 00:16:04.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.754 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=698522 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 698522 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 698522 ']' 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.754 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:04.754 [2024-11-17 18:36:51.262466] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:16:04.754 [2024-11-17 18:36:51.262538] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.012 [2024-11-17 18:36:51.332411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:05.012 [2024-11-17 18:36:51.375788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.012 [2024-11-17 18:36:51.375843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:05.012 [2024-11-17 18:36:51.375865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.012 [2024-11-17 18:36:51.375875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.012 [2024-11-17 18:36:51.375884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.012 [2024-11-17 18:36:51.377420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.012 [2024-11-17 18:36:51.377528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.012 [2024-11-17 18:36:51.377619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.012 [2024-11-17 18:36:51.377622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.012 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.012 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:05.013 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:05.013 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:05.013 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:05.013 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.013 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:05.013 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:05.013 18:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:05.271 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:05.271 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:05.271 "nvmf_tgt_1" 00:16:05.271 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:05.529 "nvmf_tgt_2" 00:16:05.529 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:05.529 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:05.529 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:05.529 18:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:05.529 true 00:16:05.529 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:05.786 true 00:16:05.787 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:05.787 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:05.787 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:05.787 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:05.787 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:05.787 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:05.787 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:05.787 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:05.787 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:05.787 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:05.787 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:05.787 rmmod nvme_tcp 00:16:05.787 rmmod nvme_fabrics 00:16:06.045 rmmod nvme_keyring 00:16:06.045 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:06.045 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:06.045 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:06.045 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 698522 ']' 00:16:06.045 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 698522 00:16:06.045 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 698522 ']' 00:16:06.045 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 698522 00:16:06.045 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:06.046 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.046 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 698522 00:16:06.046 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.046 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.046 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 698522' 00:16:06.046 killing process with pid 698522 00:16:06.046 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 698522 00:16:06.046 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 698522 00:16:06.046 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:06.046 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:06.046 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:06.046 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:06.306 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:06.306 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:06.306 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:06.306 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:06.306 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:06.306 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.306 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.306 18:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.211 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:08.211 00:16:08.211 real 0m6.009s 00:16:08.211 user 0m6.693s 00:16:08.211 sys 0m2.056s 00:16:08.211 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.211 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:08.211 ************************************ 00:16:08.211 END TEST nvmf_multitarget 00:16:08.211 ************************************ 00:16:08.211 18:36:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:08.211 18:36:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:08.211 18:36:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.211 18:36:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:08.211 ************************************ 00:16:08.211 START TEST nvmf_rpc 00:16:08.211 ************************************ 00:16:08.211 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:08.211 * Looking for test storage... 
00:16:08.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.211 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:08.211 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:08.211 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.469 18:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:08.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.469 --rc genhtml_branch_coverage=1 00:16:08.469 --rc genhtml_function_coverage=1 00:16:08.469 --rc genhtml_legend=1 00:16:08.469 --rc geninfo_all_blocks=1 00:16:08.469 --rc geninfo_unexecuted_blocks=1 
00:16:08.469 00:16:08.469 ' 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:08.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.469 --rc genhtml_branch_coverage=1 00:16:08.469 --rc genhtml_function_coverage=1 00:16:08.469 --rc genhtml_legend=1 00:16:08.469 --rc geninfo_all_blocks=1 00:16:08.469 --rc geninfo_unexecuted_blocks=1 00:16:08.469 00:16:08.469 ' 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:08.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.469 --rc genhtml_branch_coverage=1 00:16:08.469 --rc genhtml_function_coverage=1 00:16:08.469 --rc genhtml_legend=1 00:16:08.469 --rc geninfo_all_blocks=1 00:16:08.469 --rc geninfo_unexecuted_blocks=1 00:16:08.469 00:16:08.469 ' 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:08.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.469 --rc genhtml_branch_coverage=1 00:16:08.469 --rc genhtml_function_coverage=1 00:16:08.469 --rc genhtml_legend=1 00:16:08.469 --rc geninfo_all_blocks=1 00:16:08.469 --rc geninfo_unexecuted_blocks=1 00:16:08.469 00:16:08.469 ' 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.469 18:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:08.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:08.469 18:36:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:08.469 18:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.998 
18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:16:10.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:10.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:10.998 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:10.998 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:10.999 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.999 18:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:10.999 
18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:10.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:16:10.999 00:16:10.999 --- 10.0.0.2 ping statistics --- 00:16:10.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.999 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:10.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:10.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:16:10.999 00:16:10.999 --- 10.0.0.1 ping statistics --- 00:16:10.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.999 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=700638 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:10.999 
18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 700638 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 700638 ']' 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.999 [2024-11-17 18:36:57.262659] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:16:10.999 [2024-11-17 18:36:57.262764] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.999 [2024-11-17 18:36:57.337339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.999 [2024-11-17 18:36:57.386125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.999 [2024-11-17 18:36:57.386180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.999 [2024-11-17 18:36:57.386194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.999 [2024-11-17 18:36:57.386205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:10.999 [2024-11-17 18:36:57.386214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.999 [2024-11-17 18:36:57.387799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.999 [2024-11-17 18:36:57.387859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.999 [2024-11-17 18:36:57.387927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:10.999 [2024-11-17 18:36:57.387930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.999 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:10.999 "tick_rate": 2700000000, 00:16:10.999 "poll_groups": [ 00:16:10.999 { 00:16:10.999 "name": "nvmf_tgt_poll_group_000", 00:16:10.999 "admin_qpairs": 0, 00:16:10.999 "io_qpairs": 0, 00:16:10.999 
"current_admin_qpairs": 0, 00:16:10.999 "current_io_qpairs": 0, 00:16:10.999 "pending_bdev_io": 0, 00:16:10.999 "completed_nvme_io": 0, 00:16:10.999 "transports": [] 00:16:10.999 }, 00:16:10.999 { 00:16:10.999 "name": "nvmf_tgt_poll_group_001", 00:16:10.999 "admin_qpairs": 0, 00:16:10.999 "io_qpairs": 0, 00:16:10.999 "current_admin_qpairs": 0, 00:16:10.999 "current_io_qpairs": 0, 00:16:10.999 "pending_bdev_io": 0, 00:16:10.999 "completed_nvme_io": 0, 00:16:10.999 "transports": [] 00:16:10.999 }, 00:16:10.999 { 00:16:10.999 "name": "nvmf_tgt_poll_group_002", 00:16:10.999 "admin_qpairs": 0, 00:16:10.999 "io_qpairs": 0, 00:16:10.999 "current_admin_qpairs": 0, 00:16:10.999 "current_io_qpairs": 0, 00:16:10.999 "pending_bdev_io": 0, 00:16:10.999 "completed_nvme_io": 0, 00:16:10.999 "transports": [] 00:16:10.999 }, 00:16:11.000 { 00:16:11.000 "name": "nvmf_tgt_poll_group_003", 00:16:11.000 "admin_qpairs": 0, 00:16:11.000 "io_qpairs": 0, 00:16:11.000 "current_admin_qpairs": 0, 00:16:11.000 "current_io_qpairs": 0, 00:16:11.000 "pending_bdev_io": 0, 00:16:11.000 "completed_nvme_io": 0, 00:16:11.000 "transports": [] 00:16:11.000 } 00:16:11.000 ] 00:16:11.000 }' 00:16:11.000 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:11.000 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:11.000 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:11.000 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.258 [2024-11-17 18:36:57.638648] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:11.258 "tick_rate": 2700000000, 00:16:11.258 "poll_groups": [ 00:16:11.258 { 00:16:11.258 "name": "nvmf_tgt_poll_group_000", 00:16:11.258 "admin_qpairs": 0, 00:16:11.258 "io_qpairs": 0, 00:16:11.258 "current_admin_qpairs": 0, 00:16:11.258 "current_io_qpairs": 0, 00:16:11.258 "pending_bdev_io": 0, 00:16:11.258 "completed_nvme_io": 0, 00:16:11.258 "transports": [ 00:16:11.258 { 00:16:11.258 "trtype": "TCP" 00:16:11.258 } 00:16:11.258 ] 00:16:11.258 }, 00:16:11.258 { 00:16:11.258 "name": "nvmf_tgt_poll_group_001", 00:16:11.258 "admin_qpairs": 0, 00:16:11.258 "io_qpairs": 0, 00:16:11.258 "current_admin_qpairs": 0, 00:16:11.258 "current_io_qpairs": 0, 00:16:11.258 "pending_bdev_io": 0, 00:16:11.258 "completed_nvme_io": 0, 00:16:11.258 "transports": [ 00:16:11.258 { 00:16:11.258 "trtype": "TCP" 00:16:11.258 } 00:16:11.258 ] 00:16:11.258 }, 00:16:11.258 { 00:16:11.258 "name": "nvmf_tgt_poll_group_002", 00:16:11.258 "admin_qpairs": 0, 00:16:11.258 "io_qpairs": 0, 00:16:11.258 
"current_admin_qpairs": 0, 00:16:11.258 "current_io_qpairs": 0, 00:16:11.258 "pending_bdev_io": 0, 00:16:11.258 "completed_nvme_io": 0, 00:16:11.258 "transports": [ 00:16:11.258 { 00:16:11.258 "trtype": "TCP" 00:16:11.258 } 00:16:11.258 ] 00:16:11.258 }, 00:16:11.258 { 00:16:11.258 "name": "nvmf_tgt_poll_group_003", 00:16:11.258 "admin_qpairs": 0, 00:16:11.258 "io_qpairs": 0, 00:16:11.258 "current_admin_qpairs": 0, 00:16:11.258 "current_io_qpairs": 0, 00:16:11.258 "pending_bdev_io": 0, 00:16:11.258 "completed_nvme_io": 0, 00:16:11.258 "transports": [ 00:16:11.258 { 00:16:11.258 "trtype": "TCP" 00:16:11.258 } 00:16:11.258 ] 00:16:11.258 } 00:16:11.258 ] 00:16:11.258 }' 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.258 Malloc1 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.258 [2024-11-17 18:36:57.810605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.258 
18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:11.258 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:11.258 [2024-11-17 18:36:57.833197] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:11.516 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:11.516 could not add new controller: failed to write to nvme-fabrics device 00:16:11.516 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:11.516 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:11.516 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:11.516 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:11.516 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.516 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.516 18:36:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.516 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.516 18:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:12.081 18:36:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:12.081 18:36:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:12.081 18:36:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.081 18:36:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:12.081 18:36:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:13.979 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:13.979 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:13.979 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:13.979 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:13.979 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:13.979 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:13.979 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:13.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.979 18:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:13.979 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:13.979 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:13.979 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:14.237 [2024-11-17 18:37:00.596539] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:14.237 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:14.237 could not add new controller: failed to write to nvme-fabrics device 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:14.237 18:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.237 18:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:14.802 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:14.802 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:14.802 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:14.802 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:14.802 18:37:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:16.699 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:16.699 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:16.699 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:16.699 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:16.699 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:16:16.699 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:16.699 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:16.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.957 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.958 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.958 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.958 [2024-11-17 18:37:03.402129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.958 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.958 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:16.958 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.958 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.958 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.958 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:16.958 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.958 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.958 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.958 18:37:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:17.941 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:17.941 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:17.941 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.941 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:17.941 18:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:19.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.874 18:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.874 [2024-11-17 18:37:06.282766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.874 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:19.875 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.875 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.875 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.875 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.441 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:20.441 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:20.441 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.441 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:20.441 18:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:22.967 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:22.967 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:22.967 18:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.967 [2024-11-17 18:37:09.119412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.967 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:23.532 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:23.532 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:23.532 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:16:23.533 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:23.533 18:37:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.430 [2024-11-17 18:37:11.989747] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.430 18:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.430 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.430 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:25.430 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.430 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.687 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.687 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:26.253 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:26.253 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:26.253 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.253 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:26.253 18:37:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:16:28.150 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:28.150 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:28.150 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:28.150 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:28.150 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:28.150 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:28.150 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:28.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.408 [2024-11-17 18:37:14.774516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.408 18:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.408 18:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:28.973 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:28.973 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:28.973 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:28.973 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:28.973 18:37:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:30.867 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:30.867 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:16:30.867 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:30.867 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:30.867 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:30.867 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:30.867 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:31.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 [2024-11-17 18:37:17.524291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 [2024-11-17 18:37:17.572313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.126 
18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.126 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.127 [2024-11-17 18:37:17.620480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:31.127 
18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.127 [2024-11-17 18:37:17.668644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.127 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.385 [2024-11-17 
18:37:17.716852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.385 
18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.385 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:31.385 "tick_rate": 2700000000, 00:16:31.385 "poll_groups": [ 00:16:31.385 { 00:16:31.385 "name": "nvmf_tgt_poll_group_000", 00:16:31.385 "admin_qpairs": 2, 00:16:31.385 "io_qpairs": 84, 00:16:31.385 "current_admin_qpairs": 0, 00:16:31.385 "current_io_qpairs": 0, 00:16:31.385 "pending_bdev_io": 0, 00:16:31.385 "completed_nvme_io": 232, 00:16:31.385 "transports": [ 00:16:31.385 { 00:16:31.385 "trtype": "TCP" 00:16:31.385 } 00:16:31.385 ] 00:16:31.385 }, 00:16:31.385 { 00:16:31.385 "name": "nvmf_tgt_poll_group_001", 00:16:31.385 "admin_qpairs": 2, 00:16:31.385 "io_qpairs": 84, 00:16:31.385 "current_admin_qpairs": 0, 00:16:31.385 "current_io_qpairs": 0, 00:16:31.385 "pending_bdev_io": 0, 00:16:31.385 "completed_nvme_io": 136, 00:16:31.385 "transports": [ 00:16:31.385 { 00:16:31.386 "trtype": "TCP" 00:16:31.386 } 00:16:31.386 ] 00:16:31.386 }, 00:16:31.386 { 00:16:31.386 "name": "nvmf_tgt_poll_group_002", 00:16:31.386 "admin_qpairs": 1, 00:16:31.386 "io_qpairs": 84, 00:16:31.386 "current_admin_qpairs": 0, 00:16:31.386 "current_io_qpairs": 0, 00:16:31.386 "pending_bdev_io": 0, 00:16:31.386 "completed_nvme_io": 184, 00:16:31.386 "transports": [ 00:16:31.386 { 00:16:31.386 "trtype": "TCP" 00:16:31.386 } 00:16:31.386 ] 00:16:31.386 }, 00:16:31.386 { 00:16:31.386 "name": "nvmf_tgt_poll_group_003", 00:16:31.386 "admin_qpairs": 2, 00:16:31.386 "io_qpairs": 84, 
00:16:31.386 "current_admin_qpairs": 0, 00:16:31.386 "current_io_qpairs": 0, 00:16:31.386 "pending_bdev_io": 0, 00:16:31.386 "completed_nvme_io": 134, 00:16:31.386 "transports": [ 00:16:31.386 { 00:16:31.386 "trtype": "TCP" 00:16:31.386 } 00:16:31.386 ] 00:16:31.386 } 00:16:31.386 ] 00:16:31.386 }' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:31.386 rmmod nvme_tcp 00:16:31.386 rmmod nvme_fabrics 00:16:31.386 rmmod nvme_keyring 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 700638 ']' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 700638 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 700638 ']' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 700638 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 700638 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 700638' 00:16:31.386 killing process with pid 700638 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@973 -- # kill 700638 00:16:31.386 18:37:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 700638 00:16:31.646 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:31.646 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:31.646 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:31.646 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:31.646 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:31.646 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:31.646 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:31.646 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:31.646 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:31.646 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.646 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.646 18:37:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:34.186 00:16:34.186 real 0m25.512s 00:16:34.186 user 1m22.418s 00:16:34.186 sys 0m4.381s 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.186 ************************************ 00:16:34.186 END TEST nvmf_rpc 00:16:34.186 
************************************ 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:34.186 ************************************ 00:16:34.186 START TEST nvmf_invalid 00:16:34.186 ************************************ 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:34.186 * Looking for test storage... 00:16:34.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:34.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.186 --rc genhtml_branch_coverage=1 00:16:34.186 --rc genhtml_function_coverage=1 00:16:34.186 --rc genhtml_legend=1 00:16:34.186 --rc geninfo_all_blocks=1 00:16:34.186 --rc geninfo_unexecuted_blocks=1 00:16:34.186 00:16:34.186 ' 
00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:34.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.186 --rc genhtml_branch_coverage=1 00:16:34.186 --rc genhtml_function_coverage=1 00:16:34.186 --rc genhtml_legend=1 00:16:34.186 --rc geninfo_all_blocks=1 00:16:34.186 --rc geninfo_unexecuted_blocks=1 00:16:34.186 00:16:34.186 ' 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:34.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.186 --rc genhtml_branch_coverage=1 00:16:34.186 --rc genhtml_function_coverage=1 00:16:34.186 --rc genhtml_legend=1 00:16:34.186 --rc geninfo_all_blocks=1 00:16:34.186 --rc geninfo_unexecuted_blocks=1 00:16:34.186 00:16:34.186 ' 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:34.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.186 --rc genhtml_branch_coverage=1 00:16:34.186 --rc genhtml_function_coverage=1 00:16:34.186 --rc genhtml_legend=1 00:16:34.186 --rc geninfo_all_blocks=1 00:16:34.186 --rc geninfo_unexecuted_blocks=1 00:16:34.186 00:16:34.186 ' 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.186 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.187 18:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.187 
18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.187 18:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:34.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:34.187 18:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:34.187 18:37:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:36.091 18:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.091 18:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:36.091 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:36.091 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:36.091 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:36.091 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:36.091 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.092 18:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:36.092 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.350 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.350 18:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.350 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:36.350 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:36.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:16:36.350 00:16:36.350 --- 10.0.0.2 ping statistics --- 00:16:36.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.350 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:16:36.350 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:16:36.350 00:16:36.350 --- 10.0.0.1 ping statistics --- 00:16:36.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.350 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:16:36.350 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.350 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:36.351 18:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=705147 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 705147 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 705147 ']' 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.351 18:37:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:36.351 [2024-11-17 18:37:22.792045] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:16:36.351 [2024-11-17 18:37:22.792140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.351 [2024-11-17 18:37:22.863711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:36.351 [2024-11-17 18:37:22.906878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.351 [2024-11-17 18:37:22.906936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.351 [2024-11-17 18:37:22.906964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.351 [2024-11-17 18:37:22.906975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.351 [2024-11-17 18:37:22.906985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:36.351 [2024-11-17 18:37:22.908571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.351 [2024-11-17 18:37:22.908637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.351 [2024-11-17 18:37:22.908758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.351 [2024-11-17 18:37:22.908762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.609 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.609 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:36.609 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:36.609 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:36.609 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:36.609 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.609 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:36.609 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7477 00:16:36.867 [2024-11-17 18:37:23.316531] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:36.867 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:36.867 { 00:16:36.867 "nqn": "nqn.2016-06.io.spdk:cnode7477", 00:16:36.867 "tgt_name": "foobar", 00:16:36.867 "method": "nvmf_create_subsystem", 00:16:36.867 "req_id": 1 00:16:36.867 } 00:16:36.867 Got JSON-RPC error 
response 00:16:36.868 response: 00:16:36.868 { 00:16:36.868 "code": -32603, 00:16:36.868 "message": "Unable to find target foobar" 00:16:36.868 }' 00:16:36.868 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:36.868 { 00:16:36.868 "nqn": "nqn.2016-06.io.spdk:cnode7477", 00:16:36.868 "tgt_name": "foobar", 00:16:36.868 "method": "nvmf_create_subsystem", 00:16:36.868 "req_id": 1 00:16:36.868 } 00:16:36.868 Got JSON-RPC error response 00:16:36.868 response: 00:16:36.868 { 00:16:36.868 "code": -32603, 00:16:36.868 "message": "Unable to find target foobar" 00:16:36.868 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:36.868 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:36.868 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11559 00:16:37.125 [2024-11-17 18:37:23.597485] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11559: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:37.125 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:37.125 { 00:16:37.125 "nqn": "nqn.2016-06.io.spdk:cnode11559", 00:16:37.125 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:37.125 "method": "nvmf_create_subsystem", 00:16:37.125 "req_id": 1 00:16:37.125 } 00:16:37.126 Got JSON-RPC error response 00:16:37.126 response: 00:16:37.126 { 00:16:37.126 "code": -32602, 00:16:37.126 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:37.126 }' 00:16:37.126 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:37.126 { 00:16:37.126 "nqn": "nqn.2016-06.io.spdk:cnode11559", 00:16:37.126 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:37.126 "method": "nvmf_create_subsystem", 00:16:37.126 
"req_id": 1 00:16:37.126 } 00:16:37.126 Got JSON-RPC error response 00:16:37.126 response: 00:16:37.126 { 00:16:37.126 "code": -32602, 00:16:37.126 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:37.126 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:37.126 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:37.126 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7605 00:16:37.384 [2024-11-17 18:37:23.874373] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7605: invalid model number 'SPDK_Controller' 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:37.384 { 00:16:37.384 "nqn": "nqn.2016-06.io.spdk:cnode7605", 00:16:37.384 "model_number": "SPDK_Controller\u001f", 00:16:37.384 "method": "nvmf_create_subsystem", 00:16:37.384 "req_id": 1 00:16:37.384 } 00:16:37.384 Got JSON-RPC error response 00:16:37.384 response: 00:16:37.384 { 00:16:37.384 "code": -32602, 00:16:37.384 "message": "Invalid MN SPDK_Controller\u001f" 00:16:37.384 }' 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:37.384 { 00:16:37.384 "nqn": "nqn.2016-06.io.spdk:cnode7605", 00:16:37.384 "model_number": "SPDK_Controller\u001f", 00:16:37.384 "method": "nvmf_create_subsystem", 00:16:37.384 "req_id": 1 00:16:37.384 } 00:16:37.384 Got JSON-RPC error response 00:16:37.384 response: 00:16:37.384 { 00:16:37.384 "code": -32602, 00:16:37.384 "message": "Invalid MN SPDK_Controller\u001f" 00:16:37.384 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.384 18:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:37.384 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:16:37.385 18:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:16:37.385 18:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:37.385 18:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:37.385 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.642 18:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:16:37.642 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.642 18:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.643 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:16:37.643 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'b/YPeDa2,r"BuRhiIO'\'']l' 00:16:37.643 18:37:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'b/YPeDa2,r"BuRhiIO'\'']l' nqn.2016-06.io.spdk:cnode15110 00:16:37.902 [2024-11-17 18:37:24.227532] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15110: invalid serial number 'b/YPeDa2,r"BuRhiIO']l' 00:16:37.902 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:37.902 { 00:16:37.902 "nqn": "nqn.2016-06.io.spdk:cnode15110", 00:16:37.902 "serial_number": "b/YPeDa2,r\"BuRhiIO'\'']l", 00:16:37.902 "method": "nvmf_create_subsystem", 00:16:37.902 "req_id": 1 00:16:37.902 } 00:16:37.902 Got JSON-RPC error response 00:16:37.902 response: 00:16:37.902 { 00:16:37.902 "code": -32602, 00:16:37.902 "message": "Invalid SN b/YPeDa2,r\"BuRhiIO'\'']l" 00:16:37.902 }' 00:16:37.902 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:37.902 { 00:16:37.902 "nqn": "nqn.2016-06.io.spdk:cnode15110", 00:16:37.902 "serial_number": "b/YPeDa2,r\"BuRhiIO']l", 00:16:37.902 "method": "nvmf_create_subsystem", 00:16:37.902 "req_id": 1 00:16:37.902 } 00:16:37.902 Got JSON-RPC error response 00:16:37.902 response: 00:16:37.902 { 00:16:37.902 "code": -32602, 00:16:37.902 "message": "Invalid SN b/YPeDa2,r\"BuRhiIO']l" 00:16:37.902 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:37.902 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:37.902 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 
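The trace above validates each expected RPC failure by capturing the JSON-RPC error output and glob-matching it (the `[[ $out == *\I\n\v\a\l\i\d\ \S\N* ]]` checks at target/invalid.sh@41/@46/@51/@55). A stand-alone sketch of that check pattern, with the `rpc.py` call replaced by a canned response since no SPDK target is running here — the response text is copied from the `cnode7477` failure earlier in this log:

```shell
#!/usr/bin/env bash
# Stubbed RPC output: in the real test this is $(rpc.py nvmf_create_subsystem ...).
out='request:
{
  "nqn": "nqn.2016-06.io.spdk:cnode7477",
  "tgt_name": "foobar",
  "method": "nvmf_create_subsystem",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -32603,
  "message": "Unable to find target foobar"
}'

# Same shape as the invalid.sh assertions: the negative test passes only if
# the captured output contains the expected error text.
if [[ $out == *"Unable to find target"* ]]; then
    echo "negative test passed"
else
    echo "negative test FAILED" >&2
    exit 1
fi
```

The match is a plain bash glob, not JSON parsing, which is why a change in SPDK's error wording would surface here as a test failure rather than an RPC error-code mismatch.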
00:16:37.902 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:37.902 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:37.902 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:37.902 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:37.902 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:37.903 18:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:37.903 18:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:37.903 18:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.903 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:37.904 18:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:37.904 18:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:37.904 18:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 
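The loop traced above (target/invalid.sh@24-25) assembles a fuzzing string one character at a time: each numeric code point is rendered as hex with `printf %x`, decoded back to a literal character with `echo -e '\xNN'`, and appended with `string+=`. A minimal self-contained sketch of that pattern (the function name `gen_string_from_codes` is illustrative, not from the script):

```shell
# Build a string from numeric character codes, mirroring the
# printf %x / echo -e pattern traced in target/invalid.sh.
gen_string_from_codes() {
  local string='' c
  for c in "$@"; do
    # printf %x renders the code as hex; echo -e decodes \xNN back
    # into the literal character, which is appended to the string.
    string+=$(echo -e "\\x$(printf %x "$c")")
  done
  printf '%s' "$string"
}

gen_string_from_codes 45 73 34 100   # emits: -I"d
```

The codes 45, 73, 34, 100 are the ones visible at the top of this trace ('-', 'I', '"', 'd'). Note the command substitution strips trailing newlines, so code 10 (newline) would be lost; the real script only needs printable characters.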
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ f == \- ]] 00:16:37.904 18:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'f"=xT0>,@CuPOH,j~k3|t-I"dFW,@CuPOH,j~k3|t-I"dFW,@CuPOH,j~k3|t-I"dFW,@CuPOH,j~k3|t-I\"dFW,@CuPOH,j~k3|t-I\"dFW /dev/null' 00:16:40.742 18:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:43.276 00:16:43.276 real 0m9.070s 00:16:43.276 user 0m21.638s 00:16:43.276 sys 0m2.498s 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:43.276 ************************************ 00:16:43.276 END TEST nvmf_invalid 00:16:43.276 ************************************ 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:43.276 ************************************ 00:16:43.276 START TEST nvmf_connect_stress 00:16:43.276 ************************************ 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:43.276 * Looking for test storage... 00:16:43.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- scripts/common.sh@338 -- # local 'op=<' 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:16:43.276 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.277 18:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:43.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.277 --rc genhtml_branch_coverage=1 00:16:43.277 --rc genhtml_function_coverage=1 00:16:43.277 --rc genhtml_legend=1 00:16:43.277 --rc geninfo_all_blocks=1 00:16:43.277 --rc geninfo_unexecuted_blocks=1 00:16:43.277 00:16:43.277 ' 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:43.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.277 --rc genhtml_branch_coverage=1 00:16:43.277 --rc genhtml_function_coverage=1 00:16:43.277 --rc genhtml_legend=1 00:16:43.277 --rc geninfo_all_blocks=1 00:16:43.277 --rc geninfo_unexecuted_blocks=1 00:16:43.277 00:16:43.277 ' 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:43.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.277 --rc genhtml_branch_coverage=1 00:16:43.277 --rc genhtml_function_coverage=1 00:16:43.277 --rc genhtml_legend=1 00:16:43.277 --rc geninfo_all_blocks=1 00:16:43.277 --rc geninfo_unexecuted_blocks=1 00:16:43.277 00:16:43.277 ' 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:43.277 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:43.277 --rc genhtml_branch_coverage=1 00:16:43.277 --rc genhtml_function_coverage=1 00:16:43.277 --rc genhtml_legend=1 00:16:43.277 --rc geninfo_all_blocks=1 00:16:43.277 --rc geninfo_unexecuted_blocks=1 00:16:43.277 00:16:43.277 ' 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.277 18:37:29 
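The version check traced earlier (scripts/common.sh `lt 1.15 2` → `cmp_versions`) splits both dotted versions into arrays with `IFS=. read -ra` and compares component by component. A simplified, runnable sketch under the assumption that only dot-separated numeric components matter (the real script also splits on `-` and `:`; `ver_lt` is an illustrative wrapper name):

```shell
# Dotted-version "less than" in the style of scripts/common.sh cmp_versions.
ver_lt() {
  local -a ver1 ver2
  IFS=. read -ra ver1 <<< "$1"
  IFS=. read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    # Missing components compare as 0, so "2" == "2.0".
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1   # equal is not "less than"
}

ver_lt 1.15 2 && echo yes || echo no   # prints: yes
```

This is why the lcov probe above takes the `lt 1.15 2` branch: 1 < 2 decides on the first component, so lcov 1.x is treated as the pre-2.0 option set.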
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:43.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:16:43.277 18:37:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.809 18:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:45.809 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:45.809 18:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.809 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:45.810 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.810 18:37:31 
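The device-discovery steps traced here (nvmf/common.sh@411 and @427) map each PCI function to its kernel network interfaces by globbing the sysfs `net/` directory and then stripping the path prefix. A sketch of that lookup, with the sysfs root made a parameter so it can be exercised against a fake tree (the default path and the `0000:0a:00.0` address are the ones this log uses):

```shell
# Resolve the net interface name(s) behind a PCI function, as
# nvmf/common.sh does with pci_net_devs=("/sys/.../$pci/net/"*).
pci_to_netdevs() {
  local pci=$1 root=${2:-/sys/bus/pci/devices}
  # Glob the sysfs net/ directory; with nullglob unset, a missing
  # device leaves the literal pattern behind, so callers should
  # only pass addresses that exist.
  local -a pci_net_devs=("$root/$pci/net/"*)
  # ##*/ strips everything up to the last slash: path -> interface name.
  pci_net_devs=("${pci_net_devs[@]##*/}")
  printf '%s\n' "${pci_net_devs[@]}"
}
```

On the test rig this resolves `0000:0a:00.0` to `cvl_0_0` and `0000:0a:00.1` to `cvl_0_1`, matching the "Found net devices under ..." lines in the trace.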
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:45.810 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:45.810 Found net devices under 0000:0a:00.1: cvl_0_1 
00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:45.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:45.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:16:45.810 00:16:45.810 --- 10.0.0.2 ping statistics --- 00:16:45.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.810 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:45.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:45.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:16:45.810 00:16:45.810 --- 10.0.0.1 ping statistics --- 00:16:45.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.810 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:45.810 18:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=707788 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 707788 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 707788 ']' 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.810 18:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.810 [2024-11-17 18:37:31.992926] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:16:45.810 [2024-11-17 18:37:31.993022] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.810 [2024-11-17 18:37:32.068895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:45.810 [2024-11-17 18:37:32.117901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.810 [2024-11-17 18:37:32.117972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.810 [2024-11-17 18:37:32.118000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.810 [2024-11-17 18:37:32.118013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.810 [2024-11-17 18:37:32.118023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
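For readability, the `ip netns` / `ip addr` commands traced earlier in this log amount to the following setup: one port of the NIC pair (cvl_0_0) is moved into a private network namespace for the SPDK target, while the other (cvl_0_1) stays in the root namespace for the initiator. This is a dry-run sketch only — `run` just echoes each command; interface names, IPs, and the iptables rule are taken from the log, everything else is illustrative. A live run would drop the `run` wrapper and require root.

```shell
#!/bin/sh
# Dry-run sketch of the target/initiator namespace split traced above.
# "run" only echoes; replace with direct execution (as root) to apply.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                       # namespace hosting the SPDK target
run ip -4 addr flush cvl_0_0             # clear stale addresses on both ports
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"      # target-side port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1  # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                   # verify reachability in both directions
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The ping checks mirror the log: the root namespace must reach the target IP (10.0.0.2) and the target namespace must reach the initiator IP (10.0.0.1) before the test proceeds.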
00:16:45.810 [2024-11-17 18:37:32.119684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.810 [2024-11-17 18:37:32.119744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.810 [2024-11-17 18:37:32.119747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.810 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.811 [2024-11-17 18:37:32.269194] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.811 [2024-11-17 18:37:32.286609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:45.811 NULL1 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=707880 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.811 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.376 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.376 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:46.376 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.376 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.376 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.634 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.634 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:46.634 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.634 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.634 18:37:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:46.891 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.891 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:46.891 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:46.891 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.891 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.148 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.148 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:47.148 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.148 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.148 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.407 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.407 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:47.407 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.407 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.407 18:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:47.973 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.973 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:47.973 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:47.973 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.973 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.231 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.231 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:48.231 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.231 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.231 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.488 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.488 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:48.488 18:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.488 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.488 18:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:48.746 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.746 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:48.746 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:48.746 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.746 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.003 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.003 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:49.003 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.003 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.003 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.568 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.568 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:49.568 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.569 18:37:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.569 18:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:49.826 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.826 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:49.826 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:49.826 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.826 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.083 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.083 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:50.083 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.083 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.083 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.341 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.341 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:50.341 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.341 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.341 18:37:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:50.599 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.599 18:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:50.599 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:50.599 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.599 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.164 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.164 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:51.164 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.164 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.164 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.421 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.421 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:51.421 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.421 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.421 18:37:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.682 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.682 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:51.682 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.682 18:37:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.682 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.012 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.012 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:52.012 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.012 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.012 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.298 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.298 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:52.298 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.298 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.298 18:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.556 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.556 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:52.556 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.556 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.556 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.129 18:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.129 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:53.129 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.129 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.129 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.387 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.387 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:53.387 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.387 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.387 18:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.644 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.644 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:53.644 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.644 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.644 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.902 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.902 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:53.902 
18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.902 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.902 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.159 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.159 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:54.160 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.160 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.160 18:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.725 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.725 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:54.725 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.725 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.725 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:54.982 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.982 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:54.982 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.982 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.982 
18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.240 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.240 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:55.240 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.240 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.240 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.497 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.497 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:55.497 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.497 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.497 18:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:55.755 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.755 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:55.755 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.755 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.755 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:56.013 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:56.271 18:37:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 707880 00:16:56.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (707880) - No such process 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 707880 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:56.271 rmmod nvme_tcp 00:16:56.271 rmmod nvme_fabrics 00:16:56.271 rmmod nvme_keyring 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 
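The trace above shows the harness repeatedly probing the stress-test PID with `kill -0` (which sends no signal, only tests existence) until the process exits, at which point `kill` reports "No such process" and the script falls through to `wait` and cleanup. A minimal sketch of that wait pattern (the PID, sleep interval, and the `sleep` stand-in workload are illustrative, not taken from connect_stress.sh):

```shell
#!/usr/bin/env bash
# Poll a background PID with kill -0; kill -0 delivers no signal,
# it only checks whether the process still exists.
sleep 2 &            # stand-in for the stress-test workload
pid=$!

while kill -0 "$pid" 2>/dev/null; do
    # In the real suite an rpc_cmd call runs on each iteration here.
    sleep 0.25
done

wait "$pid"          # reap the exit status once the process is gone
rc=$?
echo "process $pid finished with status $rc"
```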
00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 707788 ']' 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 707788 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 707788 ']' 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 707788 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 707788 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 707788' 00:16:56.271 killing process with pid 707788 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 707788 00:16:56.271 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 707788 00:16:56.530 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:56.530 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:56.530 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:56.530 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 
00:16:56.530 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:56.530 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:56.530 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:56.530 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:56.530 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:56.530 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.530 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.530 18:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.436 18:37:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:58.436 00:16:58.436 real 0m15.607s 00:16:58.436 user 0m38.776s 00:16:58.436 sys 0m6.031s 00:16:58.436 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.436 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:58.436 ************************************ 00:16:58.436 END TEST nvmf_connect_stress 00:16:58.436 ************************************ 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:58.695 ************************************ 00:16:58.695 START TEST nvmf_fused_ordering 00:16:58.695 ************************************ 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:58.695 * Looking for test storage... 00:16:58.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # 
local 'op=<' 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:58.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.695 --rc genhtml_branch_coverage=1 00:16:58.695 --rc genhtml_function_coverage=1 00:16:58.695 --rc genhtml_legend=1 00:16:58.695 --rc geninfo_all_blocks=1 00:16:58.695 --rc geninfo_unexecuted_blocks=1 00:16:58.695 00:16:58.695 ' 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:58.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.695 --rc genhtml_branch_coverage=1 00:16:58.695 --rc genhtml_function_coverage=1 00:16:58.695 --rc genhtml_legend=1 00:16:58.695 --rc geninfo_all_blocks=1 00:16:58.695 --rc geninfo_unexecuted_blocks=1 00:16:58.695 00:16:58.695 ' 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:58.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.695 --rc genhtml_branch_coverage=1 00:16:58.695 --rc genhtml_function_coverage=1 00:16:58.695 --rc genhtml_legend=1 00:16:58.695 --rc geninfo_all_blocks=1 00:16:58.695 --rc geninfo_unexecuted_blocks=1 00:16:58.695 00:16:58.695 ' 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:58.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.695 --rc genhtml_branch_coverage=1 
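The `cmp_versions` trace above splits each version string on `.`, `-`, and `:` into an array (`read -ra ver1`), pads the shorter array, and compares component by component. A condensed reimplementation of that less-than check (a simplified sketch mirroring the traced logic, not the SPDK scripts/common.sh source):

```shell
# Compare two dotted version strings component-wise, as in the
# cmp_versions trace: split on . - :, treat missing components as 0,
# and decide at the first differing component.
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi   # strictly less
        if (( a > b )); then return 1; fi
    done
    return 1                                # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"        # the lcov check in the trace
```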
00:16:58.695 --rc genhtml_function_coverage=1 00:16:58.695 --rc genhtml_legend=1 00:16:58.695 --rc geninfo_all_blocks=1 00:16:58.695 --rc geninfo_unexecuted_blocks=1 00:16:58.695 00:16:58.695 ' 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.695 18:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.695 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:58.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
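The trace above records a real shell error from nvmf/common.sh line 33: an empty variable reaches a numeric test (`'[' '' -eq 1 ']'`), producing "integer expression expected". A defensive pattern that avoids the error by defaulting empty values before the comparison (the function and variable names here are illustrative, not from the SPDK scripts):

```shell
# '[ "" -eq 1 ]' raises "integer expression expected"; defaulting
# with ${var:-0} keeps the numeric test well-formed for empty input.
check_flag() {
    local flag=$1
    if [ "${flag:-0}" -eq 1 ]; then
        echo enabled
    else
        echo disabled
    fi
}

check_flag ""    # empty input no longer triggers the test error
check_flag 1
```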
00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:58.696 18:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.228 18:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:01.228 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:01.228 18:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:01.228 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.228 18:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:01.228 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:01.228 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:01.229 Found net devices under 0000:0a:00.1: cvl_0_1 
00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:01.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:01.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:17:01.229 00:17:01.229 --- 10.0.0.2 ping statistics --- 00:17:01.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.229 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:01.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:17:01.229 00:17:01.229 --- 10.0.0.1 ping statistics --- 00:17:01.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.229 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:01.229 18:37:47 
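The nvmf_tcp_init sequence above (common.sh@250-291) moves the target-side port of the NIC pair into a private network namespace, assigns 10.0.0.1/10.0.0.2, opens TCP/4420 in iptables, and verifies reachability with ping in both directions. A dry-run sketch of that plumbing (interface names cvl_0_0/cvl_0_1 are this run's netdevs; the real commands need root, so this version only echoes them):

```shell
# Echo-only replay of the namespace setup in the log; swap `run` for
# direct execution on a privileged host with the same interface names.
run() { printf '+ %s\n' "$*"; }

NS=cvl_0_0_ns_spdk                       # target-side namespace (common.sh@265)
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"      # target port moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1  # initiator IP stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                   # root ns -> namespaced target port
```

The two ping checks in the log (root namespace to 10.0.0.2, then `ip netns exec` back to 10.0.0.1) confirm the veth-less, physical-port split works before the target app starts.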
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=711081 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 711081 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 711081 ']' 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:01.229 [2024-11-17 18:37:47.495974] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:17:01.229 [2024-11-17 18:37:47.496066] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.229 [2024-11-17 18:37:47.567550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.229 [2024-11-17 18:37:47.613785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.229 [2024-11-17 18:37:47.613838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.229 [2024-11-17 18:37:47.613868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.229 [2024-11-17 18:37:47.613881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.229 [2024-11-17 18:37:47.613892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:01.229 [2024-11-17 18:37:47.614502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:01.229 [2024-11-17 18:37:47.766700] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
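nvmfappstart above launches nvmf_tgt inside the namespace (`ip netns exec cvl_0_0_ns_spdk ... -m 0x2`, i.e. pinned to core 1) and waitforlisten then polls for the /var/tmp/spdk.sock RPC socket with max_retries=100. A minimal stand-in for that poll loop (a plain file-existence check replaces the UNIX-socket test so the sketch runs unprivileged):

```shell
# waitforlisten-style poll: succeed as soon as the RPC socket path exists,
# give up after N retries (the log uses max_retries=100).
wait_for_sock() {
    local sock=$1 retries=${2:-100} i
    for ((i = 0; i < retries; i++)); do
        [ -e "$sock" ] && return 0   # the real helper checks the listening socket
        sleep 0.1
    done
    return 1
}
```

Only once this returns 0 does the script start issuing rpc_cmd calls against the socket, which is why the "Waiting for process to start up..." message precedes every RPC in the log.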
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:01.229 [2024-11-17 18:37:47.782887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:01.229 NULL1 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:01.229 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.488 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:01.488 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
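The rpc_cmd calls above (fused_ordering.sh@15-20) provision the target: create the TCP transport, add subsystem nqn.2016-06.io.spdk:cnode1, listen on 10.0.0.2:4420, create a 1000 MiB null bdev (which appears to the initiator as the "Namespace ID: 1 size: 1GB" reported below), and attach it as a namespace. An echo-only sketch of the same sequence via scripts/rpc.py (script path and socket assumed from a standard SPDK checkout):

```shell
run() { printf '+ %s\n' "$*"; }   # echo-only; drop the `run` wrapper to execute for real
RPC='scripts/rpc.py -s /var/tmp/spdk.sock'

run $RPC nvmf_create_transport -t tcp -o -u 8192
run $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
run $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
run $RPC bdev_null_create NULL1 1000 512      # 1000 MiB backing store, 512 B blocks
run $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

After bdev_wait_for_examine, the fused_ordering initiator binary connects with the trtype/traddr/trsvcid/subnqn string shown below and begins submitting fused command pairs.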
common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.488 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:01.488 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.488 18:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:01.488 [2024-11-17 18:37:47.825737] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:01.488 [2024-11-17 18:37:47.825773] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid711115 ] 00:17:02.053 Attached to nqn.2016-06.io.spdk:cnode1 00:17:02.053 Namespace ID: 1 size: 1GB 00:17:02.053 fused_ordering(0) 00:17:02.053 fused_ordering(1) 00:17:02.053 fused_ordering(2) 00:17:02.053 fused_ordering(3) 00:17:02.053 fused_ordering(4) 00:17:02.053 fused_ordering(5) 00:17:02.053 fused_ordering(6) 00:17:02.053 fused_ordering(7) 00:17:02.053 fused_ordering(8) 00:17:02.053 fused_ordering(9) 00:17:02.053 fused_ordering(10) 00:17:02.053 fused_ordering(11) 00:17:02.053 fused_ordering(12) 00:17:02.053 fused_ordering(13) 00:17:02.053 fused_ordering(14) 00:17:02.053 fused_ordering(15) 00:17:02.053 fused_ordering(16) 00:17:02.053 fused_ordering(17) 00:17:02.053 fused_ordering(18) 00:17:02.053 fused_ordering(19) 00:17:02.053 fused_ordering(20) 00:17:02.053 fused_ordering(21) 00:17:02.053 fused_ordering(22) 00:17:02.053 fused_ordering(23) 00:17:02.053 fused_ordering(24) 00:17:02.053 fused_ordering(25) 00:17:02.053 fused_ordering(26) 00:17:02.053 fused_ordering(27) 00:17:02.053 
fused_ordering(28) 00:17:02.053 ... fused_ordering(755) [sequential fused_ordering counters 28 through 755, with log timestamps advancing 00:17:02.053 -> 00:17:02.311 -> 00:17:02.569 -> 00:17:03.136 as batches complete]
00:17:03.136 fused_ordering(756) 00:17:03.136 fused_ordering(757) 00:17:03.136 fused_ordering(758) 00:17:03.136 fused_ordering(759) 00:17:03.136 fused_ordering(760) 00:17:03.136 fused_ordering(761) 00:17:03.136 fused_ordering(762) 00:17:03.136 fused_ordering(763) 00:17:03.136 fused_ordering(764) 00:17:03.136 fused_ordering(765) 00:17:03.136 fused_ordering(766) 00:17:03.136 fused_ordering(767) 00:17:03.136 fused_ordering(768) 00:17:03.136 fused_ordering(769) 00:17:03.136 fused_ordering(770) 00:17:03.136 fused_ordering(771) 00:17:03.136 fused_ordering(772) 00:17:03.136 fused_ordering(773) 00:17:03.136 fused_ordering(774) 00:17:03.136 fused_ordering(775) 00:17:03.136 fused_ordering(776) 00:17:03.136 fused_ordering(777) 00:17:03.136 fused_ordering(778) 00:17:03.136 fused_ordering(779) 00:17:03.136 fused_ordering(780) 00:17:03.136 fused_ordering(781) 00:17:03.136 fused_ordering(782) 00:17:03.136 fused_ordering(783) 00:17:03.136 fused_ordering(784) 00:17:03.136 fused_ordering(785) 00:17:03.136 fused_ordering(786) 00:17:03.136 fused_ordering(787) 00:17:03.136 fused_ordering(788) 00:17:03.136 fused_ordering(789) 00:17:03.136 fused_ordering(790) 00:17:03.136 fused_ordering(791) 00:17:03.136 fused_ordering(792) 00:17:03.136 fused_ordering(793) 00:17:03.136 fused_ordering(794) 00:17:03.136 fused_ordering(795) 00:17:03.136 fused_ordering(796) 00:17:03.136 fused_ordering(797) 00:17:03.136 fused_ordering(798) 00:17:03.136 fused_ordering(799) 00:17:03.136 fused_ordering(800) 00:17:03.136 fused_ordering(801) 00:17:03.136 fused_ordering(802) 00:17:03.136 fused_ordering(803) 00:17:03.136 fused_ordering(804) 00:17:03.136 fused_ordering(805) 00:17:03.136 fused_ordering(806) 00:17:03.136 fused_ordering(807) 00:17:03.136 fused_ordering(808) 00:17:03.136 fused_ordering(809) 00:17:03.136 fused_ordering(810) 00:17:03.136 fused_ordering(811) 00:17:03.136 fused_ordering(812) 00:17:03.136 fused_ordering(813) 00:17:03.136 fused_ordering(814) 00:17:03.136 fused_ordering(815) 00:17:03.136 
fused_ordering(816) 00:17:03.136 fused_ordering(817) 00:17:03.136 fused_ordering(818) 00:17:03.136 fused_ordering(819) 00:17:03.136 fused_ordering(820) 00:17:03.703 fused_ordering(821) 00:17:03.703 fused_ordering(822) 00:17:03.703 fused_ordering(823) 00:17:03.703 fused_ordering(824) 00:17:03.703 fused_ordering(825) 00:17:03.703 fused_ordering(826) 00:17:03.703 fused_ordering(827) 00:17:03.703 fused_ordering(828) 00:17:03.703 fused_ordering(829) 00:17:03.703 fused_ordering(830) 00:17:03.703 fused_ordering(831) 00:17:03.703 fused_ordering(832) 00:17:03.703 fused_ordering(833) 00:17:03.703 fused_ordering(834) 00:17:03.703 fused_ordering(835) 00:17:03.703 fused_ordering(836) 00:17:03.703 fused_ordering(837) 00:17:03.703 fused_ordering(838) 00:17:03.703 fused_ordering(839) 00:17:03.703 fused_ordering(840) 00:17:03.703 fused_ordering(841) 00:17:03.703 fused_ordering(842) 00:17:03.703 fused_ordering(843) 00:17:03.703 fused_ordering(844) 00:17:03.703 fused_ordering(845) 00:17:03.703 fused_ordering(846) 00:17:03.703 fused_ordering(847) 00:17:03.704 fused_ordering(848) 00:17:03.704 fused_ordering(849) 00:17:03.704 fused_ordering(850) 00:17:03.704 fused_ordering(851) 00:17:03.704 fused_ordering(852) 00:17:03.704 fused_ordering(853) 00:17:03.704 fused_ordering(854) 00:17:03.704 fused_ordering(855) 00:17:03.704 fused_ordering(856) 00:17:03.704 fused_ordering(857) 00:17:03.704 fused_ordering(858) 00:17:03.704 fused_ordering(859) 00:17:03.704 fused_ordering(860) 00:17:03.704 fused_ordering(861) 00:17:03.704 fused_ordering(862) 00:17:03.704 fused_ordering(863) 00:17:03.704 fused_ordering(864) 00:17:03.704 fused_ordering(865) 00:17:03.704 fused_ordering(866) 00:17:03.704 fused_ordering(867) 00:17:03.704 fused_ordering(868) 00:17:03.704 fused_ordering(869) 00:17:03.704 fused_ordering(870) 00:17:03.704 fused_ordering(871) 00:17:03.704 fused_ordering(872) 00:17:03.704 fused_ordering(873) 00:17:03.704 fused_ordering(874) 00:17:03.704 fused_ordering(875) 00:17:03.704 fused_ordering(876) 
00:17:03.704 fused_ordering(877) 00:17:03.704 fused_ordering(878) 00:17:03.704 fused_ordering(879) 00:17:03.704 fused_ordering(880) 00:17:03.704 fused_ordering(881) 00:17:03.704 fused_ordering(882) 00:17:03.704 fused_ordering(883) 00:17:03.704 fused_ordering(884) 00:17:03.704 fused_ordering(885) 00:17:03.704 fused_ordering(886) 00:17:03.704 fused_ordering(887) 00:17:03.704 fused_ordering(888) 00:17:03.704 fused_ordering(889) 00:17:03.704 fused_ordering(890) 00:17:03.704 fused_ordering(891) 00:17:03.704 fused_ordering(892) 00:17:03.704 fused_ordering(893) 00:17:03.704 fused_ordering(894) 00:17:03.704 fused_ordering(895) 00:17:03.704 fused_ordering(896) 00:17:03.704 fused_ordering(897) 00:17:03.704 fused_ordering(898) 00:17:03.704 fused_ordering(899) 00:17:03.704 fused_ordering(900) 00:17:03.704 fused_ordering(901) 00:17:03.704 fused_ordering(902) 00:17:03.704 fused_ordering(903) 00:17:03.704 fused_ordering(904) 00:17:03.704 fused_ordering(905) 00:17:03.704 fused_ordering(906) 00:17:03.704 fused_ordering(907) 00:17:03.704 fused_ordering(908) 00:17:03.704 fused_ordering(909) 00:17:03.704 fused_ordering(910) 00:17:03.704 fused_ordering(911) 00:17:03.704 fused_ordering(912) 00:17:03.704 fused_ordering(913) 00:17:03.704 fused_ordering(914) 00:17:03.704 fused_ordering(915) 00:17:03.704 fused_ordering(916) 00:17:03.704 fused_ordering(917) 00:17:03.704 fused_ordering(918) 00:17:03.704 fused_ordering(919) 00:17:03.704 fused_ordering(920) 00:17:03.704 fused_ordering(921) 00:17:03.704 fused_ordering(922) 00:17:03.704 fused_ordering(923) 00:17:03.704 fused_ordering(924) 00:17:03.704 fused_ordering(925) 00:17:03.704 fused_ordering(926) 00:17:03.704 fused_ordering(927) 00:17:03.704 fused_ordering(928) 00:17:03.704 fused_ordering(929) 00:17:03.704 fused_ordering(930) 00:17:03.704 fused_ordering(931) 00:17:03.704 fused_ordering(932) 00:17:03.704 fused_ordering(933) 00:17:03.704 fused_ordering(934) 00:17:03.704 fused_ordering(935) 00:17:03.704 fused_ordering(936) 00:17:03.704 
fused_ordering(937) 00:17:03.704 fused_ordering(938) 00:17:03.704 fused_ordering(939) 00:17:03.704 fused_ordering(940) 00:17:03.704 fused_ordering(941) 00:17:03.704 fused_ordering(942) 00:17:03.704 fused_ordering(943) 00:17:03.704 fused_ordering(944) 00:17:03.704 fused_ordering(945) 00:17:03.704 fused_ordering(946) 00:17:03.704 fused_ordering(947) 00:17:03.704 fused_ordering(948) 00:17:03.704 fused_ordering(949) 00:17:03.704 fused_ordering(950) 00:17:03.704 fused_ordering(951) 00:17:03.704 fused_ordering(952) 00:17:03.704 fused_ordering(953) 00:17:03.704 fused_ordering(954) 00:17:03.704 fused_ordering(955) 00:17:03.704 fused_ordering(956) 00:17:03.704 fused_ordering(957) 00:17:03.704 fused_ordering(958) 00:17:03.704 fused_ordering(959) 00:17:03.704 fused_ordering(960) 00:17:03.704 fused_ordering(961) 00:17:03.704 fused_ordering(962) 00:17:03.704 fused_ordering(963) 00:17:03.704 fused_ordering(964) 00:17:03.704 fused_ordering(965) 00:17:03.704 fused_ordering(966) 00:17:03.704 fused_ordering(967) 00:17:03.704 fused_ordering(968) 00:17:03.704 fused_ordering(969) 00:17:03.704 fused_ordering(970) 00:17:03.704 fused_ordering(971) 00:17:03.704 fused_ordering(972) 00:17:03.704 fused_ordering(973) 00:17:03.704 fused_ordering(974) 00:17:03.704 fused_ordering(975) 00:17:03.704 fused_ordering(976) 00:17:03.704 fused_ordering(977) 00:17:03.704 fused_ordering(978) 00:17:03.704 fused_ordering(979) 00:17:03.704 fused_ordering(980) 00:17:03.704 fused_ordering(981) 00:17:03.704 fused_ordering(982) 00:17:03.704 fused_ordering(983) 00:17:03.704 fused_ordering(984) 00:17:03.704 fused_ordering(985) 00:17:03.704 fused_ordering(986) 00:17:03.704 fused_ordering(987) 00:17:03.704 fused_ordering(988) 00:17:03.704 fused_ordering(989) 00:17:03.704 fused_ordering(990) 00:17:03.704 fused_ordering(991) 00:17:03.704 fused_ordering(992) 00:17:03.704 fused_ordering(993) 00:17:03.704 fused_ordering(994) 00:17:03.704 fused_ordering(995) 00:17:03.704 fused_ordering(996) 00:17:03.704 fused_ordering(997) 
00:17:03.704 fused_ordering(998) 00:17:03.704 fused_ordering(999) 00:17:03.704 fused_ordering(1000) 00:17:03.704 fused_ordering(1001) 00:17:03.704 fused_ordering(1002) 00:17:03.704 fused_ordering(1003) 00:17:03.704 fused_ordering(1004) 00:17:03.704 fused_ordering(1005) 00:17:03.704 fused_ordering(1006) 00:17:03.704 fused_ordering(1007) 00:17:03.704 fused_ordering(1008) 00:17:03.704 fused_ordering(1009) 00:17:03.704 fused_ordering(1010) 00:17:03.704 fused_ordering(1011) 00:17:03.704 fused_ordering(1012) 00:17:03.704 fused_ordering(1013) 00:17:03.704 fused_ordering(1014) 00:17:03.704 fused_ordering(1015) 00:17:03.704 fused_ordering(1016) 00:17:03.704 fused_ordering(1017) 00:17:03.704 fused_ordering(1018) 00:17:03.704 fused_ordering(1019) 00:17:03.704 fused_ordering(1020) 00:17:03.704 fused_ordering(1021) 00:17:03.704 fused_ordering(1022) 00:17:03.704 fused_ordering(1023) 00:17:03.704 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:03.962 rmmod nvme_tcp 00:17:03.962 rmmod nvme_fabrics 00:17:03.962 rmmod nvme_keyring 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 711081 ']' 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 711081 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 711081 ']' 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 711081 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 711081 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 711081' 00:17:03.962 killing process with pid 711081 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 711081 00:17:03.962 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 711081 00:17:04.221 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:04.221 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:17:04.221 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:04.221 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:04.221 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:04.221 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:04.221 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:04.221 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:04.221 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:04.221 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.221 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:04.221 18:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.129 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:06.129 00:17:06.129 real 0m7.579s 00:17:06.129 user 0m4.991s 00:17:06.129 sys 0m3.293s 00:17:06.129 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.129 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:06.129 ************************************ 00:17:06.129 END TEST nvmf_fused_ordering 00:17:06.129 ************************************ 00:17:06.129 18:37:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:06.129 18:37:52 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:06.129 18:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.129 18:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:06.129 ************************************ 00:17:06.129 START TEST nvmf_ns_masking 00:17:06.129 ************************************ 00:17:06.129 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:06.388 * Looking for test storage... 00:17:06.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.388 18:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:06.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.388 --rc genhtml_branch_coverage=1 00:17:06.388 --rc genhtml_function_coverage=1 00:17:06.388 --rc genhtml_legend=1 00:17:06.388 --rc geninfo_all_blocks=1 00:17:06.388 --rc geninfo_unexecuted_blocks=1 00:17:06.388 00:17:06.388 ' 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:06.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.388 --rc genhtml_branch_coverage=1 00:17:06.388 --rc genhtml_function_coverage=1 00:17:06.388 --rc genhtml_legend=1 00:17:06.388 --rc geninfo_all_blocks=1 00:17:06.388 --rc geninfo_unexecuted_blocks=1 00:17:06.388 00:17:06.388 ' 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:06.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.388 --rc genhtml_branch_coverage=1 00:17:06.388 --rc genhtml_function_coverage=1 00:17:06.388 --rc genhtml_legend=1 00:17:06.388 --rc geninfo_all_blocks=1 00:17:06.388 --rc geninfo_unexecuted_blocks=1 00:17:06.388 00:17:06.388 ' 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:06.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.388 --rc genhtml_branch_coverage=1 00:17:06.388 --rc 
genhtml_function_coverage=1 00:17:06.388 --rc genhtml_legend=1 00:17:06.388 --rc geninfo_all_blocks=1 00:17:06.388 --rc geninfo_unexecuted_blocks=1 00:17:06.388 00:17:06.388 ' 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.388 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:06.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=71b8bfc4-34cb-4438-a757-9d602812c1c8 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=55d6caa8-adfc-4a16-8fba-1e6cb887a33a 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=14f0ed0b-d843-4204-a969-4dedcb38f838 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:06.389 18:37:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:08.926 18:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.926 18:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:08.926 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.926 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:08.927 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:17:08.927 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:08.927 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:08.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:17:08.927 00:17:08.927 --- 10.0.0.2 ping statistics --- 00:17:08.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.927 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:17:08.927 00:17:08.927 --- 10.0.0.1 ping statistics --- 00:17:08.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.927 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=713433 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 713433 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 713433 ']' 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:08.927 [2024-11-17 18:37:55.278661] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:08.927 [2024-11-17 18:37:55.278743] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.927 [2024-11-17 18:37:55.349332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.927 [2024-11-17 18:37:55.390291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.927 [2024-11-17 18:37:55.390349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:08.927 [2024-11-17 18:37:55.390377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.927 [2024-11-17 18:37:55.390389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.927 [2024-11-17 18:37:55.390399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.927 [2024-11-17 18:37:55.391026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.927 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.928 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:09.186 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.186 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:09.444 [2024-11-17 18:37:55.779501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.444 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:09.444 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:09.444 18:37:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:09.703 Malloc1 00:17:09.703 18:37:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:09.962 Malloc2 00:17:09.962 18:37:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:10.220 18:37:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:10.479 18:37:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.736 [2024-11-17 18:37:57.298446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.995 18:37:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:10.995 18:37:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 14f0ed0b-d843-4204-a969-4dedcb38f838 -a 10.0.0.2 -s 4420 -i 4 00:17:10.995 18:37:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:10.995 18:37:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:10.995 18:37:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:10.995 18:37:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:10.995 18:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:13.522 [ 0]:0x1 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:13.522 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:13.523 
18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f1b92175daab46f0a2b0f7c22f3d4c3a 00:17:13.523 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f1b92175daab46f0a2b0f7c22f3d4c3a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.523 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:13.523 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:13.523 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:13.523 18:37:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:13.523 [ 0]:0x1 00:17:13.523 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:13.523 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:13.523 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f1b92175daab46f0a2b0f7c22f3d4c3a 00:17:13.523 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f1b92175daab46f0a2b0f7c22f3d4c3a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.523 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:13.523 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:13.523 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:13.523 [ 1]:0x2 00:17:13.523 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:17:13.523 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:13.523 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aacc3a91535d4f8baaebc76ab3943451 00:17:13.523 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aacc3a91535d4f8baaebc76ab3943451 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:13.523 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:13.523 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:13.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.781 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:14.039 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:14.298 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:14.298 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 14f0ed0b-d843-4204-a969-4dedcb38f838 -a 10.0.0.2 -s 4420 -i 4 00:17:14.298 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:14.298 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:14.298 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:14.298 18:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:14.298 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:14.298 18:38:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:16.827 18:38:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:16.827 [ 0]:0x2 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aacc3a91535d4f8baaebc76ab3943451 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aacc3a91535d4f8baaebc76ab3943451 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:16.827 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:16.827 [ 0]:0x1 00:17:16.828 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:16.828 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:17.086 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f1b92175daab46f0a2b0f7c22f3d4c3a 00:17:17.086 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f1b92175daab46f0a2b0f7c22f3d4c3a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:17.086 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:17.086 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:17.086 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:17.086 [ 1]:0x2 00:17:17.086 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:17.086 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:17.086 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aacc3a91535d4f8baaebc76ab3943451 00:17:17.086 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aacc3a91535d4f8baaebc76ab3943451 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:17.086 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:17.345 [ 0]:0x2 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aacc3a91535d4f8baaebc76ab3943451 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aacc3a91535d4f8baaebc76ab3943451 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:17.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.345 18:38:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:17.603 18:38:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:17.603 18:38:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 14f0ed0b-d843-4204-a969-4dedcb38f838 -a 10.0.0.2 -s 4420 -i 4 00:17:17.861 18:38:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:17.861 18:38:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:17.861 18:38:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.861 18:38:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:17.861 18:38:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:17.861 18:38:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:19.761 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:19.761 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:19.761 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.761 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:19.761 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.761 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:19.761 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:19.761 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:20.019 [ 0]:0x1 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:20.019 18:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f1b92175daab46f0a2b0f7c22f3d4c3a 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f1b92175daab46f0a2b0f7c22f3d4c3a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:20.019 [ 1]:0x2 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aacc3a91535d4f8baaebc76ab3943451 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aacc3a91535d4f8baaebc76ab3943451 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.019 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:20.278 
18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:20.278 [ 0]:0x2 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aacc3a91535d4f8baaebc76ab3943451 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aacc3a91535d4f8baaebc76ab3943451 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.278 18:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:20.278 18:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:20.537 [2024-11-17 18:38:07.027526] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:20.537 request: 00:17:20.537 { 00:17:20.537 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.537 "nsid": 2, 00:17:20.537 "host": "nqn.2016-06.io.spdk:host1", 00:17:20.537 "method": "nvmf_ns_remove_host", 00:17:20.537 "req_id": 1 00:17:20.537 } 00:17:20.537 Got JSON-RPC error response 00:17:20.537 response: 00:17:20.537 { 00:17:20.537 "code": -32602, 00:17:20.537 "message": "Invalid parameters" 00:17:20.537 } 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:20.537 18:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:20.537 [ 0]:0x2 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:20.537 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:20.796 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aacc3a91535d4f8baaebc76ab3943451 00:17:20.796 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aacc3a91535d4f8baaebc76ab3943451 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:20.796 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:20.796 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:20.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.796 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=714930 00:17:20.796 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:20.796 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.796 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 714930 /var/tmp/host.sock 00:17:20.796 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 714930 ']' 00:17:20.796 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:20.796 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.796 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:20.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:20.796 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.796 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:20.796 [2024-11-17 18:38:07.244058] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:17:20.796 [2024-11-17 18:38:07.244144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid714930 ] 00:17:20.796 [2024-11-17 18:38:07.313412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.796 [2024-11-17 18:38:07.358510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.054 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.054 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:21.054 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:21.619 18:38:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:21.619 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 71b8bfc4-34cb-4438-a757-9d602812c1c8 00:17:21.619 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:21.619 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 71B8BFC434CB4438A7579D602812C1C8 -i 00:17:22.184 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 55d6caa8-adfc-4a16-8fba-1e6cb887a33a 00:17:22.184 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:22.184 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 55D6CAA8ADFC4A168FBA1E6CB887A33A -i 00:17:22.184 18:38:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:22.442 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:23.006 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:23.007 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:23.264 nvme0n1 00:17:23.264 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:23.264 18:38:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:23.831 nvme1n2 00:17:23.831 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:23.831 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:23.831 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:23.831 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:23.831 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:24.089 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:24.089 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:24.089 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:24.089 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:24.348 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 71b8bfc4-34cb-4438-a757-9d602812c1c8 == \7\1\b\8\b\f\c\4\-\3\4\c\b\-\4\4\3\8\-\a\7\5\7\-\9\d\6\0\2\8\1\2\c\1\c\8 ]] 00:17:24.348 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:24.348 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:24.348 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:24.606 18:38:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 55d6caa8-adfc-4a16-8fba-1e6cb887a33a == \5\5\d\6\c\a\a\8\-\a\d\f\c\-\4\a\1\6\-\8\f\b\a\-\1\e\6\c\b\8\8\7\a\3\3\a ]] 00:17:24.606 18:38:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:24.864 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:25.122 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 71b8bfc4-34cb-4438-a757-9d602812c1c8 00:17:25.122 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:25.122 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 71B8BFC434CB4438A7579D602812C1C8 00:17:25.122 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:25.122 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 71B8BFC434CB4438A7579D602812C1C8 00:17:25.122 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.122 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.122 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.122 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.122 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.122 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.122 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.122 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:25.122 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 71B8BFC434CB4438A7579D602812C1C8 00:17:25.396 [2024-11-17 18:38:11.826140] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:25.396 [2024-11-17 18:38:11.826179] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:25.396 [2024-11-17 18:38:11.826209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.396 request: 00:17:25.396 { 00:17:25.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.396 "namespace": { 00:17:25.396 "bdev_name": "invalid", 00:17:25.396 "nsid": 1, 00:17:25.396 "nguid": "71B8BFC434CB4438A7579D602812C1C8", 00:17:25.396 "no_auto_visible": false 00:17:25.396 }, 00:17:25.396 "method": "nvmf_subsystem_add_ns", 00:17:25.396 "req_id": 1 00:17:25.396 } 00:17:25.396 Got JSON-RPC error response 00:17:25.396 response: 00:17:25.396 { 00:17:25.396 "code": -32602, 00:17:25.396 "message": "Invalid parameters" 00:17:25.396 } 00:17:25.396 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:25.396 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.396 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.396 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.396 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 71b8bfc4-34cb-4438-a757-9d602812c1c8 00:17:25.396 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:25.396 18:38:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 71B8BFC434CB4438A7579D602812C1C8 -i 00:17:25.688 18:38:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:27.589 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:27.589 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:27.589 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:27.847 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:27.847 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 714930 00:17:27.847 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 714930 ']' 00:17:27.847 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 714930 00:17:27.847 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:27.847 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.847 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 714930 00:17:28.105 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:28.105 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:28.105 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 714930' 00:17:28.105 killing process with pid 714930 00:17:28.105 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 714930 00:17:28.105 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 714930 00:17:28.362 18:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.620 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:28.620 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:28.620 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:28.620 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:28.620 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:28.620 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:28.620 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:28.620 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:28.620 rmmod nvme_tcp 00:17:28.620 rmmod 
nvme_fabrics 00:17:28.620 rmmod nvme_keyring 00:17:28.878 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:28.878 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:28.878 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:28.878 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 713433 ']' 00:17:28.878 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 713433 00:17:28.878 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 713433 ']' 00:17:28.879 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 713433 00:17:28.879 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:28.879 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.879 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 713433 00:17:28.879 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.879 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.879 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 713433' 00:17:28.879 killing process with pid 713433 00:17:28.879 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 713433 00:17:28.879 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 713433 00:17:29.137 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:29.137 18:38:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:29.137 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:29.137 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:29.137 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:29.137 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:29.137 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:29.137 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:29.137 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:29.137 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.137 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.137 18:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.042 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:31.042 00:17:31.042 real 0m24.838s 00:17:31.042 user 0m36.220s 00:17:31.042 sys 0m4.552s 00:17:31.042 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.042 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:31.042 ************************************ 00:17:31.042 END TEST nvmf_ns_masking 00:17:31.042 ************************************ 00:17:31.042 18:38:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:31.042 18:38:17 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:31.042 18:38:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:31.042 18:38:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.042 18:38:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:31.042 ************************************ 00:17:31.042 START TEST nvmf_nvme_cli 00:17:31.042 ************************************ 00:17:31.042 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:31.301 * Looking for test storage... 00:17:31.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.301 18:38:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:31.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.301 --rc genhtml_branch_coverage=1 00:17:31.301 --rc genhtml_function_coverage=1 00:17:31.301 --rc genhtml_legend=1 00:17:31.301 --rc geninfo_all_blocks=1 00:17:31.301 --rc geninfo_unexecuted_blocks=1 00:17:31.301 
00:17:31.301 ' 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:31.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.301 --rc genhtml_branch_coverage=1 00:17:31.301 --rc genhtml_function_coverage=1 00:17:31.301 --rc genhtml_legend=1 00:17:31.301 --rc geninfo_all_blocks=1 00:17:31.301 --rc geninfo_unexecuted_blocks=1 00:17:31.301 00:17:31.301 ' 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:31.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.301 --rc genhtml_branch_coverage=1 00:17:31.301 --rc genhtml_function_coverage=1 00:17:31.301 --rc genhtml_legend=1 00:17:31.301 --rc geninfo_all_blocks=1 00:17:31.301 --rc geninfo_unexecuted_blocks=1 00:17:31.301 00:17:31.301 ' 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:31.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.301 --rc genhtml_branch_coverage=1 00:17:31.301 --rc genhtml_function_coverage=1 00:17:31.301 --rc genhtml_legend=1 00:17:31.301 --rc geninfo_all_blocks=1 00:17:31.301 --rc geninfo_unexecuted_blocks=1 00:17:31.301 00:17:31.301 ' 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.301 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.302 18:38:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:31.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:31.302 18:38:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:33.837 18:38:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.837 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:33.838 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:33.838 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.838 18:38:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:33.838 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:33.838 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.838 18:38:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:33.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:17:33.838 00:17:33.838 --- 10.0.0.2 ping statistics --- 00:17:33.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.838 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:17:33.838 18:38:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:33.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:17:33.838 00:17:33.838 --- 10.0.0.1 ping statistics --- 00:17:33.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.838 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:33.838 18:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=717853 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 717853 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 717853 ']' 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.838 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:33.839 [2024-11-17 18:38:20.079776] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:17:33.839 [2024-11-17 18:38:20.079864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.839 [2024-11-17 18:38:20.154191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.839 [2024-11-17 18:38:20.206056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.839 [2024-11-17 18:38:20.206105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.839 [2024-11-17 18:38:20.206133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.839 [2024-11-17 18:38:20.206153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.839 [2024-11-17 18:38:20.206162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:33.839 [2024-11-17 18:38:20.207785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.839 [2024-11-17 18:38:20.207846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.839 [2024-11-17 18:38:20.207906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.839 [2024-11-17 18:38:20.207909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:33.839 [2024-11-17 18:38:20.355275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:33.839 Malloc0 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.839 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:34.098 Malloc1 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:34.098 [2024-11-17 18:38:20.455275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:34.098 00:17:34.098 Discovery Log Number of Records 2, Generation counter 2 00:17:34.098 =====Discovery Log Entry 0====== 00:17:34.098 trtype: tcp 00:17:34.098 adrfam: ipv4 00:17:34.098 subtype: current discovery subsystem 00:17:34.098 treq: not required 00:17:34.098 portid: 0 00:17:34.098 trsvcid: 4420 
00:17:34.098 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:34.098 traddr: 10.0.0.2 00:17:34.098 eflags: explicit discovery connections, duplicate discovery information 00:17:34.098 sectype: none 00:17:34.098 =====Discovery Log Entry 1====== 00:17:34.098 trtype: tcp 00:17:34.098 adrfam: ipv4 00:17:34.098 subtype: nvme subsystem 00:17:34.098 treq: not required 00:17:34.098 portid: 0 00:17:34.098 trsvcid: 4420 00:17:34.098 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:34.098 traddr: 10.0.0.2 00:17:34.098 eflags: none 00:17:34.098 sectype: none 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:34.098 18:38:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:35.031 18:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:35.031 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:35.031 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:35.031 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:35.031 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:35.031 18:38:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:36.930 
18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:36.930 /dev/nvme0n2 ]] 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:36.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.930 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:36.930 rmmod nvme_tcp 00:17:37.188 rmmod nvme_fabrics 00:17:37.188 rmmod nvme_keyring 00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 717853 ']' 
00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 717853 00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 717853 ']' 00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 717853 00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 717853 00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 717853' 00:17:37.188 killing process with pid 717853 00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 717853 00:17:37.188 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 717853 00:17:37.448 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.448 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:37.448 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:37.448 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:37.448 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:37.448 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 
00:17:37.448 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:37.448 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.448 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:37.448 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.448 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.448 18:38:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.355 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:39.355 00:17:39.355 real 0m8.322s 00:17:39.355 user 0m15.061s 00:17:39.355 sys 0m2.422s 00:17:39.355 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.355 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:39.355 ************************************ 00:17:39.355 END TEST nvmf_nvme_cli 00:17:39.355 ************************************ 00:17:39.355 18:38:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:39.355 18:38:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:39.355 18:38:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:39.355 18:38:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.355 18:38:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:39.614 ************************************ 00:17:39.614 START TEST 
nvmf_vfio_user 00:17:39.614 ************************************ 00:17:39.614 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:39.614 * Looking for test storage... 00:17:39.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.614 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:39.614 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:17:39.614 18:38:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:39.614 18:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:39.614 18:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:39.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.614 --rc genhtml_branch_coverage=1 00:17:39.614 --rc genhtml_function_coverage=1 00:17:39.614 --rc genhtml_legend=1 00:17:39.614 --rc geninfo_all_blocks=1 00:17:39.614 --rc geninfo_unexecuted_blocks=1 00:17:39.614 00:17:39.614 ' 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:39.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.614 --rc genhtml_branch_coverage=1 00:17:39.614 --rc genhtml_function_coverage=1 00:17:39.614 --rc genhtml_legend=1 00:17:39.614 --rc geninfo_all_blocks=1 00:17:39.614 --rc geninfo_unexecuted_blocks=1 00:17:39.614 00:17:39.614 ' 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:39.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.614 --rc genhtml_branch_coverage=1 00:17:39.614 --rc genhtml_function_coverage=1 00:17:39.614 --rc genhtml_legend=1 00:17:39.614 --rc geninfo_all_blocks=1 00:17:39.614 --rc geninfo_unexecuted_blocks=1 00:17:39.614 00:17:39.614 ' 00:17:39.614 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:39.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.614 --rc genhtml_branch_coverage=1 00:17:39.614 --rc genhtml_function_coverage=1 00:17:39.614 --rc genhtml_legend=1 00:17:39.614 --rc geninfo_all_blocks=1 00:17:39.614 --rc geninfo_unexecuted_blocks=1 00:17:39.614 00:17:39.614 ' 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.615 
18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:39.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:39.615 18:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=718776 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 718776' 00:17:39.615 Process pid: 718776 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 718776 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
718776 ']' 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.615 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:39.615 [2024-11-17 18:38:26.140576] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:39.615 [2024-11-17 18:38:26.140682] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.874 [2024-11-17 18:38:26.212772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.874 [2024-11-17 18:38:26.260711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.874 [2024-11-17 18:38:26.260779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.874 [2024-11-17 18:38:26.260792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.874 [2024-11-17 18:38:26.260818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.874 [2024-11-17 18:38:26.260828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:39.874 [2024-11-17 18:38:26.262274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.874 [2024-11-17 18:38:26.263708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.874 [2024-11-17 18:38:26.263735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.874 [2024-11-17 18:38:26.263738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.874 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.874 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:39.874 18:38:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:41.245 18:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:41.245 18:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:41.245 18:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:41.245 18:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:41.245 18:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:41.245 18:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:41.503 Malloc1 00:17:41.503 18:38:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:41.760 18:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:42.017 18:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:42.275 18:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:42.275 18:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:42.275 18:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:42.532 Malloc2 00:17:42.532 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:43.095 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:43.095 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:43.352 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:43.352 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:43.352 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:17:43.352 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:43.352 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:43.352 18:38:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:43.612 [2024-11-17 18:38:29.940640] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:17:43.612 [2024-11-17 18:38:29.940704] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid719191 ] 00:17:43.612 [2024-11-17 18:38:29.989428] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:43.612 [2024-11-17 18:38:29.998120] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:43.612 [2024-11-17 18:38:29.998149] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5ff0cff000 00:17:43.612 [2024-11-17 18:38:29.999115] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:43.612 [2024-11-17 18:38:30.000104] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:43.612 [2024-11-17 18:38:30.001128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:43.612 [2024-11-17 18:38:30.002123] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:43.612 [2024-11-17 18:38:30.003126] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:43.612 [2024-11-17 18:38:30.004128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:43.612 [2024-11-17 18:38:30.005135] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:43.612 [2024-11-17 18:38:30.006152] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:43.612 [2024-11-17 18:38:30.007163] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:43.612 [2024-11-17 18:38:30.007189] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5fef1f5000 00:17:43.612 [2024-11-17 18:38:30.008376] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:43.612 [2024-11-17 18:38:30.028372] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:43.612 [2024-11-17 18:38:30.028427] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:43.612 [2024-11-17 18:38:30.031329] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:43.612 [2024-11-17 18:38:30.031392] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:43.612 [2024-11-17 18:38:30.031504] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:43.612 [2024-11-17 18:38:30.031534] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:43.612 [2024-11-17 18:38:30.031546] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:43.612 [2024-11-17 18:38:30.032313] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:43.612 [2024-11-17 18:38:30.032333] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:43.612 [2024-11-17 18:38:30.032347] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:43.612 [2024-11-17 18:38:30.033317] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:43.612 [2024-11-17 18:38:30.033338] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:43.612 [2024-11-17 18:38:30.033360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:43.612 [2024-11-17 18:38:30.034326] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:43.612 [2024-11-17 18:38:30.034346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:43.612 [2024-11-17 18:38:30.035329] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:43.612 [2024-11-17 18:38:30.035347] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:43.612 [2024-11-17 18:38:30.035356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:43.612 [2024-11-17 18:38:30.035368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:43.612 [2024-11-17 18:38:30.035478] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:43.612 [2024-11-17 18:38:30.035486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:43.612 [2024-11-17 18:38:30.035495] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:43.612 [2024-11-17 18:38:30.036345] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:43.612 [2024-11-17 18:38:30.037340] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:43.612 [2024-11-17 18:38:30.038349] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:43.612 [2024-11-17 18:38:30.039346] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:43.612 [2024-11-17 18:38:30.039531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:43.612 [2024-11-17 18:38:30.040365] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:43.612 [2024-11-17 18:38:30.040384] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:43.612 [2024-11-17 18:38:30.040394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:43.612 [2024-11-17 18:38:30.040418] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:43.612 [2024-11-17 18:38:30.040437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:43.612 [2024-11-17 18:38:30.040464] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:43.612 [2024-11-17 18:38:30.040474] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:43.612 [2024-11-17 18:38:30.040481] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:43.612 [2024-11-17 18:38:30.040501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:43.612 [2024-11-17 18:38:30.040608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:17:43.612 [2024-11-17 18:38:30.040628] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:43.612 [2024-11-17 18:38:30.040652] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:43.612 [2024-11-17 18:38:30.040659] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:43.612 [2024-11-17 18:38:30.040667] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:43.612 [2024-11-17 18:38:30.040689] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:43.613 [2024-11-17 18:38:30.040715] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:43.613 [2024-11-17 18:38:30.040724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.040749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.040767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:43.613 [2024-11-17 18:38:30.040790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:43.613 [2024-11-17 18:38:30.040808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.613 [2024-11-17 18:38:30.040823] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.613 [2024-11-17 18:38:30.040837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.613 [2024-11-17 18:38:30.040850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.613 [2024-11-17 18:38:30.040860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.040872] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.040886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:43.613 [2024-11-17 18:38:30.040900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:43.613 [2024-11-17 18:38:30.040915] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:43.613 [2024-11-17 18:38:30.040926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.040937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.040947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:17:43.613 [2024-11-17 18:38:30.040961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:43.613 [2024-11-17 18:38:30.040988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:43.613 [2024-11-17 18:38:30.041076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.041094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.041108] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:43.613 [2024-11-17 18:38:30.041116] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:43.613 [2024-11-17 18:38:30.041122] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:43.613 [2024-11-17 18:38:30.041131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:43.613 [2024-11-17 18:38:30.041156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:43.613 [2024-11-17 18:38:30.041173] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:43.613 [2024-11-17 18:38:30.041191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.041205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.041218] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:43.613 [2024-11-17 18:38:30.041226] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:43.613 [2024-11-17 18:38:30.041232] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:43.613 [2024-11-17 18:38:30.041241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:43.613 [2024-11-17 18:38:30.041289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:43.613 [2024-11-17 18:38:30.041312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.041327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.041339] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:43.613 [2024-11-17 18:38:30.041347] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:43.613 [2024-11-17 18:38:30.041353] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:43.613 [2024-11-17 18:38:30.041362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:43.613 [2024-11-17 18:38:30.041381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:17:43.613 [2024-11-17 18:38:30.041395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.041406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.041420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.041431] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.041442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.041451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.041459] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:43.613 [2024-11-17 18:38:30.041466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:43.613 [2024-11-17 18:38:30.041475] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:43.613 [2024-11-17 18:38:30.041503] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:43.613 [2024-11-17 18:38:30.041522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:43.613 [2024-11-17 18:38:30.041543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:43.613 [2024-11-17 18:38:30.041559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:43.613 [2024-11-17 18:38:30.041575] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:43.613 [2024-11-17 18:38:30.041588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:43.613 [2024-11-17 18:38:30.041604] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:43.613 [2024-11-17 18:38:30.041617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:43.613 [2024-11-17 18:38:30.041640] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:43.613 [2024-11-17 18:38:30.041650] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:43.613 [2024-11-17 18:38:30.041672] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:43.613 [2024-11-17 18:38:30.041688] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:43.613 [2024-11-17 18:38:30.041695] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:43.613 [2024-11-17 18:38:30.041705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:43.613 [2024-11-17 18:38:30.041725] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:43.613 [2024-11-17 18:38:30.041734] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:43.613 [2024-11-17 18:38:30.041741] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:43.613 [2024-11-17 18:38:30.041750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:43.613 [2024-11-17 18:38:30.041762] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:43.613 [2024-11-17 18:38:30.041770] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:43.613 [2024-11-17 18:38:30.041777] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:43.613 [2024-11-17 18:38:30.041786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:43.613 [2024-11-17 18:38:30.041799] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:43.613 [2024-11-17 18:38:30.041812] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:43.613 [2024-11-17 18:38:30.041819] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:43.613 [2024-11-17 18:38:30.041829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:43.613 [2024-11-17 18:38:30.041842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:43.613 [2024-11-17 
18:38:30.041863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:43.613 [2024-11-17 18:38:30.041884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:43.613 [2024-11-17 18:38:30.041898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:43.613 ===================================================== 00:17:43.613 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:43.614 ===================================================== 00:17:43.614 Controller Capabilities/Features 00:17:43.614 ================================ 00:17:43.614 Vendor ID: 4e58 00:17:43.614 Subsystem Vendor ID: 4e58 00:17:43.614 Serial Number: SPDK1 00:17:43.614 Model Number: SPDK bdev Controller 00:17:43.614 Firmware Version: 25.01 00:17:43.614 Recommended Arb Burst: 6 00:17:43.614 IEEE OUI Identifier: 8d 6b 50 00:17:43.614 Multi-path I/O 00:17:43.614 May have multiple subsystem ports: Yes 00:17:43.614 May have multiple controllers: Yes 00:17:43.614 Associated with SR-IOV VF: No 00:17:43.614 Max Data Transfer Size: 131072 00:17:43.614 Max Number of Namespaces: 32 00:17:43.614 Max Number of I/O Queues: 127 00:17:43.614 NVMe Specification Version (VS): 1.3 00:17:43.614 NVMe Specification Version (Identify): 1.3 00:17:43.614 Maximum Queue Entries: 256 00:17:43.614 Contiguous Queues Required: Yes 00:17:43.614 Arbitration Mechanisms Supported 00:17:43.614 Weighted Round Robin: Not Supported 00:17:43.614 Vendor Specific: Not Supported 00:17:43.614 Reset Timeout: 15000 ms 00:17:43.614 Doorbell Stride: 4 bytes 00:17:43.614 NVM Subsystem Reset: Not Supported 00:17:43.614 Command Sets Supported 00:17:43.614 NVM Command Set: Supported 00:17:43.614 Boot Partition: Not Supported 00:17:43.614 Memory Page Size Minimum: 4096 bytes 00:17:43.614 
Memory Page Size Maximum: 4096 bytes 00:17:43.614 Persistent Memory Region: Not Supported 00:17:43.614 Optional Asynchronous Events Supported 00:17:43.614 Namespace Attribute Notices: Supported 00:17:43.614 Firmware Activation Notices: Not Supported 00:17:43.614 ANA Change Notices: Not Supported 00:17:43.614 PLE Aggregate Log Change Notices: Not Supported 00:17:43.614 LBA Status Info Alert Notices: Not Supported 00:17:43.614 EGE Aggregate Log Change Notices: Not Supported 00:17:43.614 Normal NVM Subsystem Shutdown event: Not Supported 00:17:43.614 Zone Descriptor Change Notices: Not Supported 00:17:43.614 Discovery Log Change Notices: Not Supported 00:17:43.614 Controller Attributes 00:17:43.614 128-bit Host Identifier: Supported 00:17:43.614 Non-Operational Permissive Mode: Not Supported 00:17:43.614 NVM Sets: Not Supported 00:17:43.614 Read Recovery Levels: Not Supported 00:17:43.614 Endurance Groups: Not Supported 00:17:43.614 Predictable Latency Mode: Not Supported 00:17:43.614 Traffic Based Keep ALive: Not Supported 00:17:43.614 Namespace Granularity: Not Supported 00:17:43.614 SQ Associations: Not Supported 00:17:43.614 UUID List: Not Supported 00:17:43.614 Multi-Domain Subsystem: Not Supported 00:17:43.614 Fixed Capacity Management: Not Supported 00:17:43.614 Variable Capacity Management: Not Supported 00:17:43.614 Delete Endurance Group: Not Supported 00:17:43.614 Delete NVM Set: Not Supported 00:17:43.614 Extended LBA Formats Supported: Not Supported 00:17:43.614 Flexible Data Placement Supported: Not Supported 00:17:43.614 00:17:43.614 Controller Memory Buffer Support 00:17:43.614 ================================ 00:17:43.614 Supported: No 00:17:43.614 00:17:43.614 Persistent Memory Region Support 00:17:43.614 ================================ 00:17:43.614 Supported: No 00:17:43.614 00:17:43.614 Admin Command Set Attributes 00:17:43.614 ============================ 00:17:43.614 Security Send/Receive: Not Supported 00:17:43.614 Format NVM: Not Supported 
00:17:43.614 Firmware Activate/Download: Not Supported 00:17:43.614 Namespace Management: Not Supported 00:17:43.614 Device Self-Test: Not Supported 00:17:43.614 Directives: Not Supported 00:17:43.614 NVMe-MI: Not Supported 00:17:43.614 Virtualization Management: Not Supported 00:17:43.614 Doorbell Buffer Config: Not Supported 00:17:43.614 Get LBA Status Capability: Not Supported 00:17:43.614 Command & Feature Lockdown Capability: Not Supported 00:17:43.614 Abort Command Limit: 4 00:17:43.614 Async Event Request Limit: 4 00:17:43.614 Number of Firmware Slots: N/A 00:17:43.614 Firmware Slot 1 Read-Only: N/A 00:17:43.614 Firmware Activation Without Reset: N/A 00:17:43.614 Multiple Update Detection Support: N/A 00:17:43.614 Firmware Update Granularity: No Information Provided 00:17:43.614 Per-Namespace SMART Log: No 00:17:43.614 Asymmetric Namespace Access Log Page: Not Supported 00:17:43.614 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:43.614 Command Effects Log Page: Supported 00:17:43.614 Get Log Page Extended Data: Supported 00:17:43.614 Telemetry Log Pages: Not Supported 00:17:43.614 Persistent Event Log Pages: Not Supported 00:17:43.614 Supported Log Pages Log Page: May Support 00:17:43.614 Commands Supported & Effects Log Page: Not Supported 00:17:43.614 Feature Identifiers & Effects Log Page:May Support 00:17:43.614 NVMe-MI Commands & Effects Log Page: May Support 00:17:43.614 Data Area 4 for Telemetry Log: Not Supported 00:17:43.614 Error Log Page Entries Supported: 128 00:17:43.614 Keep Alive: Supported 00:17:43.614 Keep Alive Granularity: 10000 ms 00:17:43.614 00:17:43.614 NVM Command Set Attributes 00:17:43.614 ========================== 00:17:43.614 Submission Queue Entry Size 00:17:43.614 Max: 64 00:17:43.614 Min: 64 00:17:43.614 Completion Queue Entry Size 00:17:43.614 Max: 16 00:17:43.614 Min: 16 00:17:43.614 Number of Namespaces: 32 00:17:43.614 Compare Command: Supported 00:17:43.614 Write Uncorrectable Command: Not Supported 00:17:43.614 Dataset 
Management Command: Supported 00:17:43.614 Write Zeroes Command: Supported 00:17:43.614 Set Features Save Field: Not Supported 00:17:43.614 Reservations: Not Supported 00:17:43.614 Timestamp: Not Supported 00:17:43.614 Copy: Supported 00:17:43.614 Volatile Write Cache: Present 00:17:43.614 Atomic Write Unit (Normal): 1 00:17:43.614 Atomic Write Unit (PFail): 1 00:17:43.614 Atomic Compare & Write Unit: 1 00:17:43.614 Fused Compare & Write: Supported 00:17:43.614 Scatter-Gather List 00:17:43.614 SGL Command Set: Supported (Dword aligned) 00:17:43.614 SGL Keyed: Not Supported 00:17:43.614 SGL Bit Bucket Descriptor: Not Supported 00:17:43.614 SGL Metadata Pointer: Not Supported 00:17:43.614 Oversized SGL: Not Supported 00:17:43.614 SGL Metadata Address: Not Supported 00:17:43.614 SGL Offset: Not Supported 00:17:43.614 Transport SGL Data Block: Not Supported 00:17:43.614 Replay Protected Memory Block: Not Supported 00:17:43.614 00:17:43.614 Firmware Slot Information 00:17:43.614 ========================= 00:17:43.614 Active slot: 1 00:17:43.614 Slot 1 Firmware Revision: 25.01 00:17:43.614 00:17:43.614 00:17:43.614 Commands Supported and Effects 00:17:43.614 ============================== 00:17:43.614 Admin Commands 00:17:43.614 -------------- 00:17:43.614 Get Log Page (02h): Supported 00:17:43.614 Identify (06h): Supported 00:17:43.614 Abort (08h): Supported 00:17:43.614 Set Features (09h): Supported 00:17:43.614 Get Features (0Ah): Supported 00:17:43.614 Asynchronous Event Request (0Ch): Supported 00:17:43.614 Keep Alive (18h): Supported 00:17:43.614 I/O Commands 00:17:43.614 ------------ 00:17:43.614 Flush (00h): Supported LBA-Change 00:17:43.614 Write (01h): Supported LBA-Change 00:17:43.614 Read (02h): Supported 00:17:43.614 Compare (05h): Supported 00:17:43.614 Write Zeroes (08h): Supported LBA-Change 00:17:43.614 Dataset Management (09h): Supported LBA-Change 00:17:43.614 Copy (19h): Supported LBA-Change 00:17:43.614 00:17:43.614 Error Log 00:17:43.614 ========= 
00:17:43.614 00:17:43.614 Arbitration 00:17:43.614 =========== 00:17:43.614 Arbitration Burst: 1 00:17:43.614 00:17:43.614 Power Management 00:17:43.614 ================ 00:17:43.614 Number of Power States: 1 00:17:43.614 Current Power State: Power State #0 00:17:43.614 Power State #0: 00:17:43.614 Max Power: 0.00 W 00:17:43.614 Non-Operational State: Operational 00:17:43.614 Entry Latency: Not Reported 00:17:43.614 Exit Latency: Not Reported 00:17:43.614 Relative Read Throughput: 0 00:17:43.614 Relative Read Latency: 0 00:17:43.614 Relative Write Throughput: 0 00:17:43.614 Relative Write Latency: 0 00:17:43.614 Idle Power: Not Reported 00:17:43.614 Active Power: Not Reported 00:17:43.614 Non-Operational Permissive Mode: Not Supported 00:17:43.614 00:17:43.614 Health Information 00:17:43.614 ================== 00:17:43.614 Critical Warnings: 00:17:43.614 Available Spare Space: OK 00:17:43.614 Temperature: OK 00:17:43.614 Device Reliability: OK 00:17:43.614 Read Only: No 00:17:43.614 Volatile Memory Backup: OK 00:17:43.614 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:43.614 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:43.614 Available Spare: 0% 00:17:43.614 Available Sp[2024-11-17 18:38:30.042060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:43.615 [2024-11-17 18:38:30.042078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:43.615 [2024-11-17 18:38:30.042123] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:17:43.615 [2024-11-17 18:38:30.042141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.615 [2024-11-17 18:38:30.042153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.615 [2024-11-17 18:38:30.042163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.615 [2024-11-17 18:38:30.042172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.615 [2024-11-17 18:38:30.042377] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:43.615 [2024-11-17 18:38:30.042398] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:43.615 [2024-11-17 18:38:30.043382] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:43.615 [2024-11-17 18:38:30.043510] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:17:43.615 [2024-11-17 18:38:30.043524] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:17:43.615 [2024-11-17 18:38:30.044386] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:43.615 [2024-11-17 18:38:30.044409] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:17:43.615 [2024-11-17 18:38:30.044602] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:43.615 [2024-11-17 18:38:30.049686] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:43.615 are Threshold: 0% 00:17:43.615 Life Percentage Used: 0% 00:17:43.615 Data Units Read: 0 00:17:43.615 Data 
Units Written: 0 00:17:43.615 Host Read Commands: 0 00:17:43.615 Host Write Commands: 0 00:17:43.615 Controller Busy Time: 0 minutes 00:17:43.615 Power Cycles: 0 00:17:43.615 Power On Hours: 0 hours 00:17:43.615 Unsafe Shutdowns: 0 00:17:43.615 Unrecoverable Media Errors: 0 00:17:43.615 Lifetime Error Log Entries: 0 00:17:43.615 Warning Temperature Time: 0 minutes 00:17:43.615 Critical Temperature Time: 0 minutes 00:17:43.615 00:17:43.615 Number of Queues 00:17:43.615 ================ 00:17:43.615 Number of I/O Submission Queues: 127 00:17:43.615 Number of I/O Completion Queues: 127 00:17:43.615 00:17:43.615 Active Namespaces 00:17:43.615 ================= 00:17:43.615 Namespace ID:1 00:17:43.615 Error Recovery Timeout: Unlimited 00:17:43.615 Command Set Identifier: NVM (00h) 00:17:43.615 Deallocate: Supported 00:17:43.615 Deallocated/Unwritten Error: Not Supported 00:17:43.615 Deallocated Read Value: Unknown 00:17:43.615 Deallocate in Write Zeroes: Not Supported 00:17:43.615 Deallocated Guard Field: 0xFFFF 00:17:43.615 Flush: Supported 00:17:43.615 Reservation: Supported 00:17:43.615 Namespace Sharing Capabilities: Multiple Controllers 00:17:43.615 Size (in LBAs): 131072 (0GiB) 00:17:43.615 Capacity (in LBAs): 131072 (0GiB) 00:17:43.615 Utilization (in LBAs): 131072 (0GiB) 00:17:43.615 NGUID: 8DFD2ECCCFC1488C9C69894E4E29EE35 00:17:43.615 UUID: 8dfd2ecc-cfc1-488c-9c69-894e4e29ee35 00:17:43.615 Thin Provisioning: Not Supported 00:17:43.615 Per-NS Atomic Units: Yes 00:17:43.615 Atomic Boundary Size (Normal): 0 00:17:43.615 Atomic Boundary Size (PFail): 0 00:17:43.615 Atomic Boundary Offset: 0 00:17:43.615 Maximum Single Source Range Length: 65535 00:17:43.615 Maximum Copy Length: 65535 00:17:43.615 Maximum Source Range Count: 1 00:17:43.615 NGUID/EUI64 Never Reused: No 00:17:43.615 Namespace Write Protected: No 00:17:43.615 Number of LBA Formats: 1 00:17:43.615 Current LBA Format: LBA Format #00 00:17:43.615 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:17:43.615 00:17:43.615 18:38:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:43.873 [2024-11-17 18:38:30.302718] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:49.136 Initializing NVMe Controllers 00:17:49.136 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:49.136 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:49.136 Initialization complete. Launching workers. 00:17:49.136 ======================================================== 00:17:49.136 Latency(us) 00:17:49.136 Device Information : IOPS MiB/s Average min max 00:17:49.136 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32813.99 128.18 3901.89 1175.46 7438.83 00:17:49.136 ======================================================== 00:17:49.136 Total : 32813.99 128.18 3901.89 1175.46 7438.83 00:17:49.136 00:17:49.136 [2024-11-17 18:38:35.328444] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:49.136 18:38:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:49.136 [2024-11-17 18:38:35.579566] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:54.400 Initializing NVMe Controllers 00:17:54.400 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:17:54.400 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:54.400 Initialization complete. Launching workers. 00:17:54.400 ======================================================== 00:17:54.400 Latency(us) 00:17:54.400 Device Information : IOPS MiB/s Average min max 00:17:54.400 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15914.93 62.17 8042.06 6968.83 15966.10 00:17:54.400 ======================================================== 00:17:54.400 Total : 15914.93 62.17 8042.06 6968.83 15966.10 00:17:54.400 00:17:54.400 [2024-11-17 18:38:40.609817] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:54.400 18:38:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:54.400 [2024-11-17 18:38:40.842955] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:59.663 [2024-11-17 18:38:45.930055] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:59.663 Initializing NVMe Controllers 00:17:59.663 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:59.663 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:59.663 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:59.663 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:59.663 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:59.663 Initialization complete. Launching workers. 
00:17:59.663 Starting thread on core 2 00:17:59.663 Starting thread on core 3 00:17:59.663 Starting thread on core 1 00:17:59.663 18:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:59.921 [2024-11-17 18:38:46.253143] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:03.200 [2024-11-17 18:38:49.320647] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:03.200 Initializing NVMe Controllers 00:18:03.200 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:03.201 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:03.201 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:03.201 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:03.201 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:03.201 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:03.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:03.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:03.201 Initialization complete. Launching workers. 
00:18:03.201 Starting thread on core 1 with urgent priority queue 00:18:03.201 Starting thread on core 2 with urgent priority queue 00:18:03.201 Starting thread on core 3 with urgent priority queue 00:18:03.201 Starting thread on core 0 with urgent priority queue 00:18:03.201 SPDK bdev Controller (SPDK1 ) core 0: 4803.33 IO/s 20.82 secs/100000 ios 00:18:03.201 SPDK bdev Controller (SPDK1 ) core 1: 6091.00 IO/s 16.42 secs/100000 ios 00:18:03.201 SPDK bdev Controller (SPDK1 ) core 2: 5925.67 IO/s 16.88 secs/100000 ios 00:18:03.201 SPDK bdev Controller (SPDK1 ) core 3: 5738.00 IO/s 17.43 secs/100000 ios 00:18:03.201 ======================================================== 00:18:03.201 00:18:03.201 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:03.201 [2024-11-17 18:38:49.629443] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:03.201 Initializing NVMe Controllers 00:18:03.201 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:03.201 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:03.201 Namespace ID: 1 size: 0GB 00:18:03.201 Initialization complete. 00:18:03.201 INFO: using host memory buffer for IO 00:18:03.201 Hello world! 
00:18:03.201 [2024-11-17 18:38:49.663049] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:03.201 18:38:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:03.458 [2024-11-17 18:38:49.969106] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:04.830 Initializing NVMe Controllers 00:18:04.830 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:04.830 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:04.830 Initialization complete. Launching workers. 00:18:04.830 submit (in ns) avg, min, max = 6841.3, 3495.6, 4016570.0 00:18:04.830 complete (in ns) avg, min, max = 26428.7, 2081.1, 4998718.9 00:18:04.830 00:18:04.830 Submit histogram 00:18:04.830 ================ 00:18:04.830 Range in us Cumulative Count 00:18:04.830 3.484 - 3.508: 0.0718% ( 9) 00:18:04.830 3.508 - 3.532: 0.5741% ( 63) 00:18:04.830 3.532 - 3.556: 1.7780% ( 151) 00:18:04.830 3.556 - 3.579: 5.1188% ( 419) 00:18:04.830 3.579 - 3.603: 10.6682% ( 696) 00:18:04.830 3.603 - 3.627: 19.0560% ( 1052) 00:18:04.830 3.627 - 3.650: 27.8026% ( 1097) 00:18:04.830 3.650 - 3.674: 35.5207% ( 968) 00:18:04.830 3.674 - 3.698: 41.9152% ( 802) 00:18:04.830 3.698 - 3.721: 47.9828% ( 761) 00:18:04.830 3.721 - 3.745: 52.2963% ( 541) 00:18:04.830 3.745 - 3.769: 56.2590% ( 497) 00:18:04.830 3.769 - 3.793: 59.9585% ( 464) 00:18:04.830 3.793 - 3.816: 63.6262% ( 460) 00:18:04.830 3.816 - 3.840: 67.3417% ( 466) 00:18:04.830 3.840 - 3.864: 71.9184% ( 574) 00:18:04.830 3.864 - 3.887: 76.4790% ( 572) 00:18:04.830 3.887 - 3.911: 80.0510% ( 448) 00:18:04.830 3.911 - 3.935: 83.4636% ( 428) 00:18:04.830 3.935 - 3.959: 85.6243% ( 271) 00:18:04.830 3.959 - 3.982: 87.4023% ( 223) 
00:18:04.830 3.982 - 4.006: 89.0847% ( 211) 00:18:04.830 4.006 - 4.030: 90.5119% ( 179) 00:18:04.830 4.030 - 4.053: 91.4846% ( 122) 00:18:04.830 4.053 - 4.077: 92.3936% ( 114) 00:18:04.830 4.077 - 4.101: 93.1989% ( 101) 00:18:04.830 4.101 - 4.124: 94.1477% ( 119) 00:18:04.830 4.124 - 4.148: 94.6659% ( 65) 00:18:04.830 4.148 - 4.172: 95.1603% ( 62) 00:18:04.830 4.172 - 4.196: 95.5430% ( 48) 00:18:04.830 4.196 - 4.219: 95.7742% ( 29) 00:18:04.830 4.219 - 4.243: 95.9974% ( 28) 00:18:04.830 4.243 - 4.267: 96.1888% ( 24) 00:18:04.830 4.267 - 4.290: 96.3164% ( 16) 00:18:04.830 4.290 - 4.314: 96.3881% ( 9) 00:18:04.831 4.314 - 4.338: 96.4679% ( 10) 00:18:04.831 4.338 - 4.361: 96.5317% ( 8) 00:18:04.831 4.361 - 4.385: 96.6034% ( 9) 00:18:04.831 4.385 - 4.409: 96.6672% ( 8) 00:18:04.831 4.409 - 4.433: 96.7310% ( 8) 00:18:04.831 4.433 - 4.456: 96.7708% ( 5) 00:18:04.831 4.456 - 4.480: 96.8187% ( 6) 00:18:04.831 4.480 - 4.504: 96.8346% ( 2) 00:18:04.831 4.527 - 4.551: 96.8426% ( 1) 00:18:04.831 4.599 - 4.622: 96.8745% ( 4) 00:18:04.831 4.646 - 4.670: 96.8904% ( 2) 00:18:04.831 4.670 - 4.693: 96.9542% ( 8) 00:18:04.831 4.693 - 4.717: 97.0021% ( 6) 00:18:04.831 4.717 - 4.741: 97.0659% ( 8) 00:18:04.831 4.741 - 4.764: 97.1137% ( 6) 00:18:04.831 4.764 - 4.788: 97.1695% ( 7) 00:18:04.831 4.788 - 4.812: 97.1934% ( 3) 00:18:04.831 4.812 - 4.836: 97.2652% ( 9) 00:18:04.831 4.836 - 4.859: 97.3210% ( 7) 00:18:04.831 4.859 - 4.883: 97.3768% ( 7) 00:18:04.831 4.883 - 4.907: 97.4007% ( 3) 00:18:04.831 4.907 - 4.930: 97.4805% ( 10) 00:18:04.831 4.930 - 4.954: 97.5363% ( 7) 00:18:04.831 4.954 - 4.978: 97.5682% ( 4) 00:18:04.831 4.978 - 5.001: 97.6240% ( 7) 00:18:04.831 5.001 - 5.025: 97.6320% ( 1) 00:18:04.831 5.025 - 5.049: 97.6559% ( 3) 00:18:04.831 5.049 - 5.073: 97.6878% ( 4) 00:18:04.831 5.096 - 5.120: 97.7117% ( 3) 00:18:04.831 5.120 - 5.144: 97.7197% ( 1) 00:18:04.831 5.144 - 5.167: 97.7356% ( 2) 00:18:04.831 5.167 - 5.191: 97.7516% ( 2) 00:18:04.831 5.191 - 5.215: 97.7595% ( 1) 
00:18:04.831 5.215 - 5.239: 97.7755% ( 2) 00:18:04.831 5.262 - 5.286: 97.7834% ( 1) 00:18:04.831 5.286 - 5.310: 97.7994% ( 2) 00:18:04.831 5.310 - 5.333: 97.8074% ( 1) 00:18:04.831 5.381 - 5.404: 97.8233% ( 2) 00:18:04.831 5.428 - 5.452: 97.8313% ( 1) 00:18:04.831 5.476 - 5.499: 97.8393% ( 1) 00:18:04.831 5.499 - 5.523: 97.8472% ( 1) 00:18:04.831 5.594 - 5.618: 97.8632% ( 2) 00:18:04.831 5.618 - 5.641: 97.8712% ( 1) 00:18:04.831 5.736 - 5.760: 97.8791% ( 1) 00:18:04.831 5.760 - 5.784: 97.8871% ( 1) 00:18:04.831 5.807 - 5.831: 97.8951% ( 1) 00:18:04.831 5.831 - 5.855: 97.9030% ( 1) 00:18:04.831 5.902 - 5.926: 97.9110% ( 1) 00:18:04.831 6.163 - 6.210: 97.9190% ( 1) 00:18:04.831 6.210 - 6.258: 97.9270% ( 1) 00:18:04.831 6.258 - 6.305: 97.9349% ( 1) 00:18:04.831 6.353 - 6.400: 97.9429% ( 1) 00:18:04.831 6.400 - 6.447: 97.9509% ( 1) 00:18:04.831 6.542 - 6.590: 97.9748% ( 3) 00:18:04.831 6.779 - 6.827: 97.9908% ( 2) 00:18:04.831 7.016 - 7.064: 97.9987% ( 1) 00:18:04.831 7.111 - 7.159: 98.0067% ( 1) 00:18:04.831 7.396 - 7.443: 98.0226% ( 2) 00:18:04.831 7.443 - 7.490: 98.0386% ( 2) 00:18:04.831 7.680 - 7.727: 98.0466% ( 1) 00:18:04.831 7.775 - 7.822: 98.0545% ( 1) 00:18:04.831 7.822 - 7.870: 98.0625% ( 1) 00:18:04.831 7.964 - 8.012: 98.0785% ( 2) 00:18:04.831 8.012 - 8.059: 98.0944% ( 2) 00:18:04.831 8.059 - 8.107: 98.1103% ( 2) 00:18:04.831 8.154 - 8.201: 98.1263% ( 2) 00:18:04.831 8.296 - 8.344: 98.1343% ( 1) 00:18:04.831 8.486 - 8.533: 98.1502% ( 2) 00:18:04.831 8.533 - 8.581: 98.1662% ( 2) 00:18:04.831 8.676 - 8.723: 98.1741% ( 1) 00:18:04.831 8.723 - 8.770: 98.2060% ( 4) 00:18:04.831 8.770 - 8.818: 98.2140% ( 1) 00:18:04.831 8.913 - 8.960: 98.2220% ( 1) 00:18:04.831 9.007 - 9.055: 98.2299% ( 1) 00:18:04.831 9.055 - 9.102: 98.2539% ( 3) 00:18:04.831 9.197 - 9.244: 98.2778% ( 3) 00:18:04.831 9.434 - 9.481: 98.2858% ( 1) 00:18:04.831 9.481 - 9.529: 98.3017% ( 2) 00:18:04.831 9.529 - 9.576: 98.3097% ( 1) 00:18:04.831 9.576 - 9.624: 98.3336% ( 3) 00:18:04.831 9.624 - 
9.671: 98.3416% ( 1) 00:18:04.831 9.671 - 9.719: 98.3495% ( 1) 00:18:04.831 9.719 - 9.766: 98.3575% ( 1) 00:18:04.831 9.766 - 9.813: 98.3655% ( 1) 00:18:04.831 9.813 - 9.861: 98.3894% ( 3) 00:18:04.831 9.908 - 9.956: 98.3974% ( 1) 00:18:04.831 10.050 - 10.098: 98.4054% ( 1) 00:18:04.831 10.145 - 10.193: 98.4133% ( 1) 00:18:04.831 10.240 - 10.287: 98.4213% ( 1) 00:18:04.831 10.335 - 10.382: 98.4373% ( 2) 00:18:04.831 10.430 - 10.477: 98.4532% ( 2) 00:18:04.831 10.477 - 10.524: 98.4612% ( 1) 00:18:04.831 10.524 - 10.572: 98.4691% ( 1) 00:18:04.831 10.619 - 10.667: 98.4771% ( 1) 00:18:04.831 10.667 - 10.714: 98.4931% ( 2) 00:18:04.831 10.714 - 10.761: 98.5010% ( 1) 00:18:04.831 10.809 - 10.856: 98.5170% ( 2) 00:18:04.831 10.856 - 10.904: 98.5329% ( 2) 00:18:04.831 10.951 - 10.999: 98.5409% ( 1) 00:18:04.831 10.999 - 11.046: 98.5489% ( 1) 00:18:04.831 11.046 - 11.093: 98.5568% ( 1) 00:18:04.831 11.141 - 11.188: 98.5648% ( 1) 00:18:04.831 11.236 - 11.283: 98.5887% ( 3) 00:18:04.831 11.330 - 11.378: 98.6047% ( 2) 00:18:04.831 11.473 - 11.520: 98.6127% ( 1) 00:18:04.831 11.520 - 11.567: 98.6206% ( 1) 00:18:04.831 11.567 - 11.615: 98.6286% ( 1) 00:18:04.831 11.615 - 11.662: 98.6525% ( 3) 00:18:04.831 11.662 - 11.710: 98.6605% ( 1) 00:18:04.831 11.804 - 11.852: 98.6685% ( 1) 00:18:04.831 11.899 - 11.947: 98.6764% ( 1) 00:18:04.831 12.089 - 12.136: 98.6844% ( 1) 00:18:04.831 12.231 - 12.326: 98.6924% ( 1) 00:18:04.831 12.421 - 12.516: 98.7004% ( 1) 00:18:04.831 12.516 - 12.610: 98.7083% ( 1) 00:18:04.831 12.610 - 12.705: 98.7163% ( 1) 00:18:04.831 12.800 - 12.895: 98.7402% ( 3) 00:18:04.831 12.895 - 12.990: 98.7482% ( 1) 00:18:04.831 13.179 - 13.274: 98.7562% ( 1) 00:18:04.831 13.274 - 13.369: 98.7801% ( 3) 00:18:04.831 13.464 - 13.559: 98.8040% ( 3) 00:18:04.831 13.653 - 13.748: 98.8120% ( 1) 00:18:04.831 13.843 - 13.938: 98.8200% ( 1) 00:18:04.831 13.938 - 14.033: 98.8439% ( 3) 00:18:04.831 14.033 - 14.127: 98.8678% ( 3) 00:18:04.831 14.127 - 14.222: 98.8758% ( 1) 
00:18:04.831 14.222 - 14.317: 98.8838% ( 1) 00:18:04.831 14.412 - 14.507: 98.8917% ( 1) 00:18:04.831 14.696 - 14.791: 98.9156% ( 3) 00:18:04.831 14.886 - 14.981: 98.9316% ( 2) 00:18:04.831 14.981 - 15.076: 98.9396% ( 1) 00:18:04.831 15.076 - 15.170: 98.9555% ( 2) 00:18:04.831 15.360 - 15.455: 98.9635% ( 1) 00:18:04.831 17.161 - 17.256: 98.9794% ( 2) 00:18:04.831 17.256 - 17.351: 99.0193% ( 5) 00:18:04.831 17.351 - 17.446: 99.0352% ( 2) 00:18:04.831 17.446 - 17.541: 99.0592% ( 3) 00:18:04.831 17.541 - 17.636: 99.1229% ( 8) 00:18:04.831 17.636 - 17.730: 99.1548% ( 4) 00:18:04.831 17.730 - 17.825: 99.1867% ( 4) 00:18:04.831 17.825 - 17.920: 99.2346% ( 6) 00:18:04.831 17.920 - 18.015: 99.2665% ( 4) 00:18:04.831 18.015 - 18.110: 99.2984% ( 4) 00:18:04.831 18.110 - 18.204: 99.3621% ( 8) 00:18:04.831 18.204 - 18.299: 99.4100% ( 6) 00:18:04.831 18.299 - 18.394: 99.4977% ( 11) 00:18:04.831 18.394 - 18.489: 99.5615% ( 8) 00:18:04.831 18.489 - 18.584: 99.6173% ( 7) 00:18:04.831 18.584 - 18.679: 99.6412% ( 3) 00:18:04.831 18.679 - 18.773: 99.6811% ( 5) 00:18:04.831 18.773 - 18.868: 99.7130% ( 4) 00:18:04.831 18.868 - 18.963: 99.7688% ( 7) 00:18:04.831 18.963 - 19.058: 99.7768% ( 1) 00:18:04.831 19.058 - 19.153: 99.7847% ( 1) 00:18:04.831 19.247 - 19.342: 99.7927% ( 1) 00:18:04.831 19.437 - 19.532: 99.8166% ( 3) 00:18:04.831 19.532 - 19.627: 99.8405% ( 3) 00:18:04.831 20.196 - 20.290: 99.8485% ( 1) 00:18:04.831 22.850 - 22.945: 99.8565% ( 1) 00:18:04.831 23.040 - 23.135: 99.8645% ( 1) 00:18:04.831 23.988 - 24.083: 99.8804% ( 2) 00:18:04.831 24.841 - 25.031: 99.8884% ( 1) 00:18:04.831 25.600 - 25.790: 99.8963% ( 1) 00:18:04.831 25.979 - 26.169: 99.9043% ( 1) 00:18:04.831 26.738 - 26.927: 99.9123% ( 1) 00:18:04.831 28.824 - 29.013: 99.9203% ( 1) 00:18:04.831 30.910 - 31.099: 99.9282% ( 1) 00:18:04.831 3665.161 - 3689.434: 99.9362% ( 1) 00:18:04.831 3980.705 - 4004.978: 99.9761% ( 5) 00:18:04.831 4004.978 - 4029.250: 100.0000% ( 3) 00:18:04.831 00:18:04.831 Complete histogram 
00:18:04.831 ================== 00:18:04.831 Range in us Cumulative Count 00:18:04.831 2.074 - 2.086: 0.6777% ( 85) 00:18:04.831 2.086 - 2.098: 27.6670% ( 3385) 00:18:04.831 2.098 - 2.110: 43.0553% ( 1930) 00:18:04.831 2.110 - 2.121: 45.6705% ( 328) 00:18:04.831 2.121 - 2.133: 53.1096% ( 933) 00:18:04.831 2.133 - 2.145: 55.5653% ( 308) 00:18:04.831 2.145 - 2.157: 58.8582% ( 413) 00:18:04.831 2.157 - 2.169: 70.3157% ( 1437) 00:18:04.831 2.169 - 2.181: 73.3695% ( 383) 00:18:04.831 2.181 - 2.193: 74.7409% ( 172) 00:18:04.831 2.193 - 2.204: 77.5714% ( 355) 00:18:04.831 2.204 - 2.216: 78.3288% ( 95) 00:18:04.831 2.216 - 2.228: 79.4929% ( 146) 00:18:04.831 2.228 - 2.240: 85.3054% ( 729) 00:18:04.832 2.240 - 2.252: 88.7498% ( 432) 00:18:04.832 2.252 - 2.264: 90.3365% ( 199) 00:18:04.832 2.264 - 2.276: 91.8514% ( 190) 00:18:04.832 2.276 - 2.287: 92.5211% ( 84) 00:18:04.832 2.287 - 2.299: 92.9118% ( 49) 00:18:04.832 2.299 - 2.311: 93.3743% ( 58) 00:18:04.832 2.311 - 2.323: 94.4347% ( 133) 00:18:04.832 2.323 - 2.335: 95.0247% ( 74) 00:18:04.832 2.335 - 2.347: 95.1284% ( 13) 00:18:04.832 2.347 - 2.359: 95.1682% ( 5) 00:18:04.832 2.359 - 2.370: 95.2161% ( 6) 00:18:04.832 2.370 - 2.382: 95.3038% ( 11) 00:18:04.832 2.382 - 2.394: 95.5270% ( 28) 00:18:04.832 2.394 - 2.406: 96.0293% ( 63) 00:18:04.832 2.406 - 2.418: 96.4838% ( 57) 00:18:04.832 2.418 - 2.430: 96.6991% ( 27) 00:18:04.832 2.430 - 2.441: 96.7788% ( 10) 00:18:04.832 2.441 - 2.453: 96.9303% ( 19) 00:18:04.832 2.453 - 2.465: 97.1217% ( 24) 00:18:04.832 2.465 - 2.477: 97.2811% ( 20) 00:18:04.832 2.477 - 2.489: 97.4805% ( 25) 00:18:04.832 2.489 - 2.501: 97.6001% ( 15) 00:18:04.832 2.501 - 2.513: 97.6957% ( 12) 00:18:04.832 2.513 - 2.524: 97.7994% ( 13) 00:18:04.832 2.524 - 2.536: 97.9110% ( 14) 00:18:04.832 2.536 - 2.548: 97.9828% ( 9) 00:18:04.832 2.548 - 2.560: 98.0306% ( 6) 00:18:04.832 2.560 - 2.572: 98.1183% ( 11) 00:18:04.832 2.572 - 2.584: 98.1741% ( 7) 00:18:04.832 2.584 - 2.596: 98.1901% ( 2) 00:18:04.832 2.596 - 
2.607: 98.1981% ( 1) 00:18:04.832 2.607 - 2.619: 98.2140% ( 2) 00:18:04.832 2.631 - 2.643: 98.2459% ( 4) 00:18:04.832 2.643 - 2.655: 98.2539% ( 1) 00:18:04.832 2.655 - 2.667: 98.2778% ( 3) 00:18:04.832 2.667 - 2.679: 98.3017% ( 3) 00:18:04.832 2.679 - 2.690: 98.3177% ( 2) 00:18:04.832 2.690 - 2.702: 98.3336% ( 2) 00:18:04.832 2.702 - 2.714: 98.3416% ( 1) 00:18:04.832 2.714 - 2.726: 98.3495% ( 1) 00:18:04.832 2.738 - 2.750: 98.3575% ( 1) 00:18:04.832 2.750 - 2.761: 98.3655% ( 1) 00:18:04.832 2.761 - 2.773: 98.3735% ( 1) 00:18:04.832 2.773 - 2.785: 98.3814% ( 1) 00:18:04.832 2.951 - 2.963: 98.3894% ( 1) 00:18:04.832 3.058 - 3.081: 98.3974% ( 1) 00:18:04.832 3.271 - 3.295: 98.4054% ( 1) 00:18:04.832 3.366 - 3.390: 98.4133% ( 1) 00:18:04.832 3.413 - 3.437: 98.4293% ( 2) 00:18:04.832 3.437 - 3.461: 98.4373% ( 1) 00:18:04.832 3.461 - 3.484: 98.4452% ( 1) 00:18:04.832 3.508 - 3.532: 98.4612% ( 2) 00:18:04.832 3.532 - 3.556: 98.4851% ( 3) 00:18:04.832 3.556 - 3.579: 98.4931% ( 1) 00:18:04.832 3.579 - 3.603: 98.5010% ( 1) 00:18:04.832 3.674 - 3.698: 98.5170% ( 2) 00:18:04.832 3.698 - 3.721: 98.5329% ( 2) 00:18:04.832 3.721 - 3.745: 98.5409% ( 1) 00:18:04.832 3.793 - 3.816: 98.5568% ( 2) 00:18:04.832 3.816 - 3.840: 98.5648% ( 1) 00:18:04.832 3.864 - 3.887: 98.5728% ( 1) 00:18:04.832 3.911 - 3.935: 98.5887% ( 2) 00:18:04.832 3.982 - 4.006: 98.6047% ( 2) 00:18:04.832 4.077 - 4.101: 98.6206% ( 2) 00:18:04.832 4.101 - 4.124: 98.6286% ( 1) 00:18:04.832 4.172 - 4.196: 98.6366% ( 1) 00:18:04.832 5.902 - 5.926: 98.6446% ( 1) 00:18:04.832 5.973 - 5.997: 98.6525% ( 1) 00:18:04.832 6.068 - 6.116: 98.6605% ( 1) 00:18:04.832 6.116 - 6.163: 98.6685% ( 1) 00:18:04.832 6.305 - 6.353: 98.6764% ( 1) 00:18:04.832 6.353 - 6.400: 98.6844% ( 1) 00:18:04.832 6.637 - 6.684: 98.6924% ( 1) 00:18:04.832 6.779 - 6.827: 98.7004% ( 1) 00:18:04.832 6.827 - 6.874: 98.7083% ( 1) 00:18:04.832 7.301 - 7.348: 98.7163% ( 1) 00:18:04.832 7.585 - 7.633: 98.7243% ( 1) 00:18:04.832 7.822 - 7.870: 98.7323% ( 1) 
00:18:04.832 7.917 - 7.964: 98.7402% ( 1) 00:18:04.832 7.964 - 8.012: 98.7482% ( 1) 00:18:04.832 8.391 - 8.439: 98.7562% ( 1) 00:18:04.832 8.628 - 8.676: 98.7642% ( 1) 00:18:04.832 8.865 - 8.913: 98.7721% ( 1) 00:18:04.832 9.197 - 9.244: 98.7801% ( 1) 00:18:04.832 9.244 - 9.292: 98.7881% ( 1) 00:18:04.832 9.529 - 9.576: 98.7960% ( 1) 00:18:04.832 11.852 - 11.899: 98.8040% ( 1) 00:18:04.832 13.843 - 13.938: 98.8120% ( 1) 00:18:04.832 15.644 - 15.739: 98.8279% ( 2) 00:18:04.832 15.739 - 15.834: 98.8519% ( 3) 00:18:04.832 15.834 - 15.929: 98.8758% ( 3) 00:18:04.832 15.929 - 16.024: 98.8917% ( 2) 00:18:04.832 16.024 - 16.119: 98.9236% ( 4) 00:18:04.832 16.119 - 16.213: 98.9396% ( 2) 00:18:04.832 16.213 - 16.308: 98.9555% ( 2) 00:18:04.832 16.308 - 16.403: 98.9794% ( 3) 00:18:04.832 16.403 - 16.498: 99.0033% ( 3) 00:18:04.832 16.498 - 16.593: 99.0432% ( 5) 00:18:04.832 16.593 - 16.687: 99.1229% ( 10) 00:18:04.832 16.687 - 16.782: 99.1389% ( 2) 00:18:04.832 16.782 - 16.877: 99.1947% ( 7) 00:18:04.832 16.877 - 16.972: 99.2107% ( 2) 00:18:04.832 16.972 - 17.067: 99.2186% ( 1) 00:18:04.832 17.067 - 17.161: 99.2505% ( 4) 00:18:04.832 17.351 - 17.446: 99.2744% ( 3) 00:18:04.832 17.446 - 17.541: 99.2904% ( 2) 00:18:04.832 17.730 - 17.825: 99.2984% ( 1) 00:18:04.832 17.825 - 17.920: 99.3063% ( 1) 00:18:04.832 17.920 - 18.015: 99.3303% ( 3) 00:18:04.832 18.015 - 18.110: 99.3382% ( 1) 00:18:04.832 18.110 - 18.204: 99.3462% ( 1) 00:18:04.832 18.299 - 18.394: 99.3542% ( 1) 00:18:04.832 18.394 - 18.489: 99.3621% ( 1) 00:18:04.832 18.489 - 18.584: 99.3701% ( 1) 00:18:04.832 20.954 - 21.049: 99.3781% ( 1) 00:18:04.832 32.616 - 32.806: 99.3861% ( 1) 00:18:04.832 148.670 - 149.428: 99.3940% ( 1) 00:18:04.832 2524.350 - 2536.486: 99.4020% ( 1) 00:18:04.832 3373.890 - 3398.163: 99.4100% ( 1) 00:18:04.832 3980.705 - 4004.978: 99.8565%[2024-11-17 18:38:50.989506] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:04.832 ( 56) 
00:18:04.832 4004.978 - 4029.250: 99.9920% ( 17) 00:18:04.832 4975.881 - 5000.154: 100.0000% ( 1) 00:18:04.832 00:18:04.832 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:04.832 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:04.832 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:04.832 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:04.832 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:04.832 [ 00:18:04.832 { 00:18:04.832 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:04.832 "subtype": "Discovery", 00:18:04.832 "listen_addresses": [], 00:18:04.832 "allow_any_host": true, 00:18:04.832 "hosts": [] 00:18:04.832 }, 00:18:04.832 { 00:18:04.832 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:04.832 "subtype": "NVMe", 00:18:04.832 "listen_addresses": [ 00:18:04.832 { 00:18:04.832 "trtype": "VFIOUSER", 00:18:04.832 "adrfam": "IPv4", 00:18:04.832 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:04.832 "trsvcid": "0" 00:18:04.832 } 00:18:04.832 ], 00:18:04.832 "allow_any_host": true, 00:18:04.832 "hosts": [], 00:18:04.832 "serial_number": "SPDK1", 00:18:04.832 "model_number": "SPDK bdev Controller", 00:18:04.832 "max_namespaces": 32, 00:18:04.832 "min_cntlid": 1, 00:18:04.832 "max_cntlid": 65519, 00:18:04.832 "namespaces": [ 00:18:04.832 { 00:18:04.832 "nsid": 1, 00:18:04.832 "bdev_name": "Malloc1", 00:18:04.832 "name": "Malloc1", 00:18:04.832 "nguid": "8DFD2ECCCFC1488C9C69894E4E29EE35", 00:18:04.832 "uuid": "8dfd2ecc-cfc1-488c-9c69-894e4e29ee35" 00:18:04.832 } 
00:18:04.832 ] 00:18:04.832 }, 00:18:04.832 { 00:18:04.832 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:04.832 "subtype": "NVMe", 00:18:04.832 "listen_addresses": [ 00:18:04.832 { 00:18:04.832 "trtype": "VFIOUSER", 00:18:04.832 "adrfam": "IPv4", 00:18:04.832 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:04.832 "trsvcid": "0" 00:18:04.832 } 00:18:04.832 ], 00:18:04.832 "allow_any_host": true, 00:18:04.832 "hosts": [], 00:18:04.832 "serial_number": "SPDK2", 00:18:04.832 "model_number": "SPDK bdev Controller", 00:18:04.832 "max_namespaces": 32, 00:18:04.832 "min_cntlid": 1, 00:18:04.832 "max_cntlid": 65519, 00:18:04.832 "namespaces": [ 00:18:04.832 { 00:18:04.832 "nsid": 1, 00:18:04.832 "bdev_name": "Malloc2", 00:18:04.832 "name": "Malloc2", 00:18:04.832 "nguid": "98C3A36319FB4D8983DB97C6140BBA7E", 00:18:04.832 "uuid": "98c3a363-19fb-4d89-83db-97c6140bba7e" 00:18:04.832 } 00:18:04.832 ] 00:18:04.832 } 00:18:04.832 ] 00:18:04.832 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:04.832 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=721704 00:18:04.833 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:04.833 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:04.833 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:04.833 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:04.833 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:04.833 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:04.833 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:04.833 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:05.090 [2024-11-17 18:38:51.475189] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:05.090 Malloc3 00:18:05.090 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:05.348 [2024-11-17 18:38:51.867987] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:05.348 18:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:05.348 Asynchronous Event Request test 00:18:05.348 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:05.348 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:05.348 Registering asynchronous event callbacks... 00:18:05.348 Starting namespace attribute notice tests for all controllers... 00:18:05.348 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:05.348 aer_cb - Changed Namespace 00:18:05.348 Cleaning up... 
00:18:05.606 [ 00:18:05.606 { 00:18:05.606 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:05.606 "subtype": "Discovery", 00:18:05.606 "listen_addresses": [], 00:18:05.606 "allow_any_host": true, 00:18:05.606 "hosts": [] 00:18:05.606 }, 00:18:05.606 { 00:18:05.606 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:05.606 "subtype": "NVMe", 00:18:05.606 "listen_addresses": [ 00:18:05.606 { 00:18:05.606 "trtype": "VFIOUSER", 00:18:05.606 "adrfam": "IPv4", 00:18:05.606 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:05.606 "trsvcid": "0" 00:18:05.606 } 00:18:05.606 ], 00:18:05.606 "allow_any_host": true, 00:18:05.606 "hosts": [], 00:18:05.606 "serial_number": "SPDK1", 00:18:05.606 "model_number": "SPDK bdev Controller", 00:18:05.606 "max_namespaces": 32, 00:18:05.606 "min_cntlid": 1, 00:18:05.606 "max_cntlid": 65519, 00:18:05.606 "namespaces": [ 00:18:05.606 { 00:18:05.606 "nsid": 1, 00:18:05.606 "bdev_name": "Malloc1", 00:18:05.606 "name": "Malloc1", 00:18:05.606 "nguid": "8DFD2ECCCFC1488C9C69894E4E29EE35", 00:18:05.606 "uuid": "8dfd2ecc-cfc1-488c-9c69-894e4e29ee35" 00:18:05.606 }, 00:18:05.606 { 00:18:05.606 "nsid": 2, 00:18:05.606 "bdev_name": "Malloc3", 00:18:05.606 "name": "Malloc3", 00:18:05.606 "nguid": "AC32ABB148AA4F17A716654EC7591E49", 00:18:05.606 "uuid": "ac32abb1-48aa-4f17-a716-654ec7591e49" 00:18:05.606 } 00:18:05.606 ] 00:18:05.606 }, 00:18:05.606 { 00:18:05.606 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:05.606 "subtype": "NVMe", 00:18:05.606 "listen_addresses": [ 00:18:05.606 { 00:18:05.606 "trtype": "VFIOUSER", 00:18:05.606 "adrfam": "IPv4", 00:18:05.606 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:05.606 "trsvcid": "0" 00:18:05.606 } 00:18:05.606 ], 00:18:05.606 "allow_any_host": true, 00:18:05.606 "hosts": [], 00:18:05.606 "serial_number": "SPDK2", 00:18:05.606 "model_number": "SPDK bdev Controller", 00:18:05.606 "max_namespaces": 32, 00:18:05.606 "min_cntlid": 1, 00:18:05.606 "max_cntlid": 65519, 00:18:05.606 "namespaces": [ 
00:18:05.606 { 00:18:05.606 "nsid": 1, 00:18:05.606 "bdev_name": "Malloc2", 00:18:05.606 "name": "Malloc2", 00:18:05.606 "nguid": "98C3A36319FB4D8983DB97C6140BBA7E", 00:18:05.606 "uuid": "98c3a363-19fb-4d89-83db-97c6140bba7e" 00:18:05.606 } 00:18:05.606 ] 00:18:05.606 } 00:18:05.606 ] 00:18:05.606 18:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 721704 00:18:05.606 18:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:05.606 18:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:05.606 18:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:05.606 18:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:05.866 [2024-11-17 18:38:52.184496] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:18:05.866 [2024-11-17 18:38:52.184540] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid721836 ] 00:18:05.866 [2024-11-17 18:38:52.235508] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:05.866 [2024-11-17 18:38:52.243969] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:05.866 [2024-11-17 18:38:52.244014] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7c553c6000 00:18:05.866 [2024-11-17 18:38:52.244969] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:05.866 [2024-11-17 18:38:52.245987] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:05.866 [2024-11-17 18:38:52.246996] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:05.866 [2024-11-17 18:38:52.248002] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:05.866 [2024-11-17 18:38:52.249011] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:05.866 [2024-11-17 18:38:52.250021] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:05.866 [2024-11-17 18:38:52.251024] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:05.866 
[2024-11-17 18:38:52.252033] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:05.866 [2024-11-17 18:38:52.253041] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:05.866 [2024-11-17 18:38:52.253064] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7c540be000 00:18:05.866 [2024-11-17 18:38:52.254180] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:05.866 [2024-11-17 18:38:52.268900] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:05.866 [2024-11-17 18:38:52.268936] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:05.866 [2024-11-17 18:38:52.271052] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:05.866 [2024-11-17 18:38:52.271105] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:05.867 [2024-11-17 18:38:52.271188] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:05.867 [2024-11-17 18:38:52.271213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:05.867 [2024-11-17 18:38:52.271224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:05.867 [2024-11-17 18:38:52.272070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:05.867 [2024-11-17 18:38:52.272091] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:05.867 [2024-11-17 18:38:52.272104] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:05.867 [2024-11-17 18:38:52.273063] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:05.867 [2024-11-17 18:38:52.273088] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:05.867 [2024-11-17 18:38:52.273103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:05.867 [2024-11-17 18:38:52.274071] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:05.867 [2024-11-17 18:38:52.274092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:05.867 [2024-11-17 18:38:52.275077] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:05.867 [2024-11-17 18:38:52.275096] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:05.867 [2024-11-17 18:38:52.275105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:05.867 [2024-11-17 18:38:52.275116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:05.867 [2024-11-17 18:38:52.275226] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:05.867 [2024-11-17 18:38:52.275234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:05.867 [2024-11-17 18:38:52.275242] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:05.867 [2024-11-17 18:38:52.276090] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:05.867 [2024-11-17 18:38:52.277098] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:05.867 [2024-11-17 18:38:52.278102] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:05.867 [2024-11-17 18:38:52.279097] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:05.867 [2024-11-17 18:38:52.279181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:05.867 [2024-11-17 18:38:52.280115] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:05.867 [2024-11-17 18:38:52.280134] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:05.867 [2024-11-17 18:38:52.280144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:05.867 [2024-11-17 18:38:52.280167] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:05.867 [2024-11-17 18:38:52.280180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:05.867 [2024-11-17 18:38:52.280202] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:05.867 [2024-11-17 18:38:52.280211] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:05.867 [2024-11-17 18:38:52.280217] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:05.867 [2024-11-17 18:38:52.280235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:05.867 [2024-11-17 18:38:52.290688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:05.867 [2024-11-17 18:38:52.290711] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:05.867 [2024-11-17 18:38:52.290720] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:05.867 [2024-11-17 18:38:52.290727] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:05.867 [2024-11-17 18:38:52.290735] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:05.867 [2024-11-17 18:38:52.290747] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:05.867 [2024-11-17 18:38:52.290756] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:05.867 [2024-11-17 18:38:52.290764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:05.867 [2024-11-17 18:38:52.290780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:05.867 [2024-11-17 18:38:52.290796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:05.867 [2024-11-17 18:38:52.298686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:05.867 [2024-11-17 18:38:52.298709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:05.867 [2024-11-17 18:38:52.298724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:05.867 [2024-11-17 18:38:52.298736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:05.867 [2024-11-17 18:38:52.298748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:05.867 [2024-11-17 18:38:52.298757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:05.867 [2024-11-17 18:38:52.298769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:05.867 [2024-11-17 18:38:52.298783] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:05.867 [2024-11-17 18:38:52.306702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:05.867 [2024-11-17 18:38:52.306725] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:05.867 [2024-11-17 18:38:52.306736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:05.867 [2024-11-17 18:38:52.306748] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:05.867 [2024-11-17 18:38:52.306757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:05.867 [2024-11-17 18:38:52.306771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:05.867 [2024-11-17 18:38:52.314685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:05.867 [2024-11-17 18:38:52.314763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:05.867 [2024-11-17 18:38:52.314781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:05.867 
[2024-11-17 18:38:52.314794] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:05.867 [2024-11-17 18:38:52.314803] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:05.867 [2024-11-17 18:38:52.314809] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:05.867 [2024-11-17 18:38:52.314818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:05.867 [2024-11-17 18:38:52.322687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:05.867 [2024-11-17 18:38:52.322715] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:05.867 [2024-11-17 18:38:52.322732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:05.867 [2024-11-17 18:38:52.322747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:05.867 [2024-11-17 18:38:52.322760] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:05.867 [2024-11-17 18:38:52.322769] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:05.867 [2024-11-17 18:38:52.322774] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:05.867 [2024-11-17 18:38:52.322784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:05.867 [2024-11-17 18:38:52.330687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:05.867 [2024-11-17 18:38:52.330715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:05.867 [2024-11-17 18:38:52.330733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:05.867 [2024-11-17 18:38:52.330746] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:05.867 [2024-11-17 18:38:52.330754] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:05.867 [2024-11-17 18:38:52.330760] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:05.867 [2024-11-17 18:38:52.330770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:05.867 [2024-11-17 18:38:52.338686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:05.867 [2024-11-17 18:38:52.338716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:05.868 [2024-11-17 18:38:52.338730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:05.868 [2024-11-17 18:38:52.338747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:05.868 [2024-11-17 18:38:52.338758] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:05.868 [2024-11-17 18:38:52.338770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:05.868 [2024-11-17 18:38:52.338779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:05.868 [2024-11-17 18:38:52.338787] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:05.868 [2024-11-17 18:38:52.338795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:05.868 [2024-11-17 18:38:52.338803] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:05.868 [2024-11-17 18:38:52.338827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:05.868 [2024-11-17 18:38:52.346688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:05.868 [2024-11-17 18:38:52.346714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:05.868 [2024-11-17 18:38:52.354686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:05.868 [2024-11-17 18:38:52.354712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:05.868 [2024-11-17 18:38:52.362688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:05.868 [2024-11-17 
18:38:52.362713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:05.868 [2024-11-17 18:38:52.370688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:05.868 [2024-11-17 18:38:52.370718] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:05.868 [2024-11-17 18:38:52.370729] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:05.868 [2024-11-17 18:38:52.370735] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:05.868 [2024-11-17 18:38:52.370741] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:05.868 [2024-11-17 18:38:52.370747] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:05.868 [2024-11-17 18:38:52.370756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:05.868 [2024-11-17 18:38:52.370768] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:05.868 [2024-11-17 18:38:52.370776] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:05.868 [2024-11-17 18:38:52.370782] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:05.868 [2024-11-17 18:38:52.370791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:05.868 [2024-11-17 18:38:52.370802] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:05.868 [2024-11-17 18:38:52.370810] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:05.868 [2024-11-17 18:38:52.370816] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:05.868 [2024-11-17 18:38:52.370824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:05.868 [2024-11-17 18:38:52.370840] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:05.868 [2024-11-17 18:38:52.370849] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:05.868 [2024-11-17 18:38:52.370854] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:05.868 [2024-11-17 18:38:52.370863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:05.868 [2024-11-17 18:38:52.378704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:05.868 [2024-11-17 18:38:52.378735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:05.868 [2024-11-17 18:38:52.378754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:05.868 [2024-11-17 18:38:52.378767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:05.868 ===================================================== 00:18:05.868 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:05.868 ===================================================== 00:18:05.868 Controller Capabilities/Features 00:18:05.868 
================================ 00:18:05.868 Vendor ID: 4e58 00:18:05.868 Subsystem Vendor ID: 4e58 00:18:05.868 Serial Number: SPDK2 00:18:05.868 Model Number: SPDK bdev Controller 00:18:05.868 Firmware Version: 25.01 00:18:05.868 Recommended Arb Burst: 6 00:18:05.868 IEEE OUI Identifier: 8d 6b 50 00:18:05.868 Multi-path I/O 00:18:05.868 May have multiple subsystem ports: Yes 00:18:05.868 May have multiple controllers: Yes 00:18:05.868 Associated with SR-IOV VF: No 00:18:05.868 Max Data Transfer Size: 131072 00:18:05.868 Max Number of Namespaces: 32 00:18:05.868 Max Number of I/O Queues: 127 00:18:05.868 NVMe Specification Version (VS): 1.3 00:18:05.868 NVMe Specification Version (Identify): 1.3 00:18:05.868 Maximum Queue Entries: 256 00:18:05.868 Contiguous Queues Required: Yes 00:18:05.868 Arbitration Mechanisms Supported 00:18:05.868 Weighted Round Robin: Not Supported 00:18:05.868 Vendor Specific: Not Supported 00:18:05.868 Reset Timeout: 15000 ms 00:18:05.868 Doorbell Stride: 4 bytes 00:18:05.868 NVM Subsystem Reset: Not Supported 00:18:05.868 Command Sets Supported 00:18:05.868 NVM Command Set: Supported 00:18:05.868 Boot Partition: Not Supported 00:18:05.868 Memory Page Size Minimum: 4096 bytes 00:18:05.868 Memory Page Size Maximum: 4096 bytes 00:18:05.868 Persistent Memory Region: Not Supported 00:18:05.868 Optional Asynchronous Events Supported 00:18:05.868 Namespace Attribute Notices: Supported 00:18:05.868 Firmware Activation Notices: Not Supported 00:18:05.868 ANA Change Notices: Not Supported 00:18:05.868 PLE Aggregate Log Change Notices: Not Supported 00:18:05.868 LBA Status Info Alert Notices: Not Supported 00:18:05.868 EGE Aggregate Log Change Notices: Not Supported 00:18:05.868 Normal NVM Subsystem Shutdown event: Not Supported 00:18:05.868 Zone Descriptor Change Notices: Not Supported 00:18:05.868 Discovery Log Change Notices: Not Supported 00:18:05.868 Controller Attributes 00:18:05.868 128-bit Host Identifier: Supported 00:18:05.868 
Non-Operational Permissive Mode: Not Supported 00:18:05.868 NVM Sets: Not Supported 00:18:05.868 Read Recovery Levels: Not Supported 00:18:05.868 Endurance Groups: Not Supported 00:18:05.868 Predictable Latency Mode: Not Supported 00:18:05.868 Traffic Based Keep ALive: Not Supported 00:18:05.868 Namespace Granularity: Not Supported 00:18:05.868 SQ Associations: Not Supported 00:18:05.868 UUID List: Not Supported 00:18:05.868 Multi-Domain Subsystem: Not Supported 00:18:05.868 Fixed Capacity Management: Not Supported 00:18:05.868 Variable Capacity Management: Not Supported 00:18:05.868 Delete Endurance Group: Not Supported 00:18:05.868 Delete NVM Set: Not Supported 00:18:05.868 Extended LBA Formats Supported: Not Supported 00:18:05.868 Flexible Data Placement Supported: Not Supported 00:18:05.868 00:18:05.868 Controller Memory Buffer Support 00:18:05.868 ================================ 00:18:05.868 Supported: No 00:18:05.868 00:18:05.868 Persistent Memory Region Support 00:18:05.868 ================================ 00:18:05.868 Supported: No 00:18:05.868 00:18:05.868 Admin Command Set Attributes 00:18:05.868 ============================ 00:18:05.868 Security Send/Receive: Not Supported 00:18:05.868 Format NVM: Not Supported 00:18:05.868 Firmware Activate/Download: Not Supported 00:18:05.868 Namespace Management: Not Supported 00:18:05.868 Device Self-Test: Not Supported 00:18:05.868 Directives: Not Supported 00:18:05.868 NVMe-MI: Not Supported 00:18:05.868 Virtualization Management: Not Supported 00:18:05.868 Doorbell Buffer Config: Not Supported 00:18:05.868 Get LBA Status Capability: Not Supported 00:18:05.868 Command & Feature Lockdown Capability: Not Supported 00:18:05.868 Abort Command Limit: 4 00:18:05.868 Async Event Request Limit: 4 00:18:05.868 Number of Firmware Slots: N/A 00:18:05.868 Firmware Slot 1 Read-Only: N/A 00:18:05.868 Firmware Activation Without Reset: N/A 00:18:05.868 Multiple Update Detection Support: N/A 00:18:05.868 Firmware Update 
Granularity: No Information Provided 00:18:05.868 Per-Namespace SMART Log: No 00:18:05.868 Asymmetric Namespace Access Log Page: Not Supported 00:18:05.868 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:05.868 Command Effects Log Page: Supported 00:18:05.868 Get Log Page Extended Data: Supported 00:18:05.868 Telemetry Log Pages: Not Supported 00:18:05.868 Persistent Event Log Pages: Not Supported 00:18:05.868 Supported Log Pages Log Page: May Support 00:18:05.868 Commands Supported & Effects Log Page: Not Supported 00:18:05.868 Feature Identifiers & Effects Log Page:May Support 00:18:05.869 NVMe-MI Commands & Effects Log Page: May Support 00:18:05.869 Data Area 4 for Telemetry Log: Not Supported 00:18:05.869 Error Log Page Entries Supported: 128 00:18:05.869 Keep Alive: Supported 00:18:05.869 Keep Alive Granularity: 10000 ms 00:18:05.869 00:18:05.869 NVM Command Set Attributes 00:18:05.869 ========================== 00:18:05.869 Submission Queue Entry Size 00:18:05.869 Max: 64 00:18:05.869 Min: 64 00:18:05.869 Completion Queue Entry Size 00:18:05.869 Max: 16 00:18:05.869 Min: 16 00:18:05.869 Number of Namespaces: 32 00:18:05.869 Compare Command: Supported 00:18:05.869 Write Uncorrectable Command: Not Supported 00:18:05.869 Dataset Management Command: Supported 00:18:05.869 Write Zeroes Command: Supported 00:18:05.869 Set Features Save Field: Not Supported 00:18:05.869 Reservations: Not Supported 00:18:05.869 Timestamp: Not Supported 00:18:05.869 Copy: Supported 00:18:05.869 Volatile Write Cache: Present 00:18:05.869 Atomic Write Unit (Normal): 1 00:18:05.869 Atomic Write Unit (PFail): 1 00:18:05.869 Atomic Compare & Write Unit: 1 00:18:05.869 Fused Compare & Write: Supported 00:18:05.869 Scatter-Gather List 00:18:05.869 SGL Command Set: Supported (Dword aligned) 00:18:05.869 SGL Keyed: Not Supported 00:18:05.869 SGL Bit Bucket Descriptor: Not Supported 00:18:05.869 SGL Metadata Pointer: Not Supported 00:18:05.869 Oversized SGL: Not Supported 00:18:05.869 SGL 
Metadata Address: Not Supported 00:18:05.869 SGL Offset: Not Supported 00:18:05.869 Transport SGL Data Block: Not Supported 00:18:05.869 Replay Protected Memory Block: Not Supported 00:18:05.869 00:18:05.869 Firmware Slot Information 00:18:05.869 ========================= 00:18:05.869 Active slot: 1 00:18:05.869 Slot 1 Firmware Revision: 25.01 00:18:05.869 00:18:05.869 00:18:05.869 Commands Supported and Effects 00:18:05.869 ============================== 00:18:05.869 Admin Commands 00:18:05.869 -------------- 00:18:05.869 Get Log Page (02h): Supported 00:18:05.869 Identify (06h): Supported 00:18:05.869 Abort (08h): Supported 00:18:05.869 Set Features (09h): Supported 00:18:05.869 Get Features (0Ah): Supported 00:18:05.869 Asynchronous Event Request (0Ch): Supported 00:18:05.869 Keep Alive (18h): Supported 00:18:05.869 I/O Commands 00:18:05.869 ------------ 00:18:05.869 Flush (00h): Supported LBA-Change 00:18:05.869 Write (01h): Supported LBA-Change 00:18:05.869 Read (02h): Supported 00:18:05.869 Compare (05h): Supported 00:18:05.869 Write Zeroes (08h): Supported LBA-Change 00:18:05.869 Dataset Management (09h): Supported LBA-Change 00:18:05.869 Copy (19h): Supported LBA-Change 00:18:05.869 00:18:05.869 Error Log 00:18:05.869 ========= 00:18:05.869 00:18:05.869 Arbitration 00:18:05.869 =========== 00:18:05.869 Arbitration Burst: 1 00:18:05.869 00:18:05.869 Power Management 00:18:05.869 ================ 00:18:05.869 Number of Power States: 1 00:18:05.869 Current Power State: Power State #0 00:18:05.869 Power State #0: 00:18:05.869 Max Power: 0.00 W 00:18:05.869 Non-Operational State: Operational 00:18:05.869 Entry Latency: Not Reported 00:18:05.869 Exit Latency: Not Reported 00:18:05.869 Relative Read Throughput: 0 00:18:05.869 Relative Read Latency: 0 00:18:05.869 Relative Write Throughput: 0 00:18:05.869 Relative Write Latency: 0 00:18:05.869 Idle Power: Not Reported 00:18:05.869 Active Power: Not Reported 00:18:05.869 Non-Operational Permissive Mode: Not 
Supported 00:18:05.869 00:18:05.869 Health Information 00:18:05.869 ================== 00:18:05.869 Critical Warnings: 00:18:05.869 Available Spare Space: OK 00:18:05.869 Temperature: OK 00:18:05.869 Device Reliability: OK 00:18:05.869 Read Only: No 00:18:05.869 Volatile Memory Backup: OK 00:18:05.869 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:05.869 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:05.869 Available Spare: 0% 00:18:05.869 Available Sp[2024-11-17 18:38:52.378892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:05.869 [2024-11-17 18:38:52.386686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:05.869 [2024-11-17 18:38:52.386738] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:05.869 [2024-11-17 18:38:52.386757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.869 [2024-11-17 18:38:52.386768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.869 [2024-11-17 18:38:52.386778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.869 [2024-11-17 18:38:52.386788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.869 [2024-11-17 18:38:52.386853] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:05.869 [2024-11-17 18:38:52.386874] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:05.869 
[2024-11-17 18:38:52.387859] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:05.869 [2024-11-17 18:38:52.387948] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:05.869 [2024-11-17 18:38:52.387963] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:05.869 [2024-11-17 18:38:52.388863] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:05.869 [2024-11-17 18:38:52.388887] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:05.869 [2024-11-17 18:38:52.388939] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:05.869 [2024-11-17 18:38:52.390154] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:05.869 are Threshold: 0% 00:18:05.869 Life Percentage Used: 0% 00:18:05.869 Data Units Read: 0 00:18:05.869 Data Units Written: 0 00:18:05.869 Host Read Commands: 0 00:18:05.869 Host Write Commands: 0 00:18:05.869 Controller Busy Time: 0 minutes 00:18:05.869 Power Cycles: 0 00:18:05.869 Power On Hours: 0 hours 00:18:05.869 Unsafe Shutdowns: 0 00:18:05.869 Unrecoverable Media Errors: 0 00:18:05.869 Lifetime Error Log Entries: 0 00:18:05.869 Warning Temperature Time: 0 minutes 00:18:05.869 Critical Temperature Time: 0 minutes 00:18:05.869 00:18:05.869 Number of Queues 00:18:05.869 ================ 00:18:05.869 Number of I/O Submission Queues: 127 00:18:05.869 Number of I/O Completion Queues: 127 00:18:05.869 00:18:05.869 Active Namespaces 00:18:05.869 ================= 00:18:05.869 Namespace ID:1 00:18:05.869 Error Recovery Timeout: Unlimited 
00:18:05.869 Command Set Identifier: NVM (00h) 00:18:05.869 Deallocate: Supported 00:18:05.869 Deallocated/Unwritten Error: Not Supported 00:18:05.869 Deallocated Read Value: Unknown 00:18:05.869 Deallocate in Write Zeroes: Not Supported 00:18:05.869 Deallocated Guard Field: 0xFFFF 00:18:05.869 Flush: Supported 00:18:05.869 Reservation: Supported 00:18:05.869 Namespace Sharing Capabilities: Multiple Controllers 00:18:05.869 Size (in LBAs): 131072 (0GiB) 00:18:05.869 Capacity (in LBAs): 131072 (0GiB) 00:18:05.869 Utilization (in LBAs): 131072 (0GiB) 00:18:05.869 NGUID: 98C3A36319FB4D8983DB97C6140BBA7E 00:18:05.869 UUID: 98c3a363-19fb-4d89-83db-97c6140bba7e 00:18:05.869 Thin Provisioning: Not Supported 00:18:05.869 Per-NS Atomic Units: Yes 00:18:05.869 Atomic Boundary Size (Normal): 0 00:18:05.869 Atomic Boundary Size (PFail): 0 00:18:05.869 Atomic Boundary Offset: 0 00:18:05.869 Maximum Single Source Range Length: 65535 00:18:05.869 Maximum Copy Length: 65535 00:18:05.869 Maximum Source Range Count: 1 00:18:05.869 NGUID/EUI64 Never Reused: No 00:18:05.869 Namespace Write Protected: No 00:18:05.869 Number of LBA Formats: 1 00:18:05.869 Current LBA Format: LBA Format #00 00:18:05.869 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:05.869 00:18:05.869 18:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:06.127 [2024-11-17 18:38:52.626486] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:11.447 Initializing NVMe Controllers 00:18:11.447 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:11.447 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:18:11.447 Initialization complete. Launching workers. 00:18:11.447 ======================================================== 00:18:11.447 Latency(us) 00:18:11.447 Device Information : IOPS MiB/s Average min max 00:18:11.447 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33551.80 131.06 3816.00 1167.46 10718.86 00:18:11.447 ======================================================== 00:18:11.447 Total : 33551.80 131.06 3816.00 1167.46 10718.86 00:18:11.447 00:18:11.447 [2024-11-17 18:38:57.737063] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:11.447 18:38:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:11.447 [2024-11-17 18:38:58.001816] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:16.718 Initializing NVMe Controllers 00:18:16.718 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:16.718 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:16.718 Initialization complete. Launching workers. 
00:18:16.718 ======================================================== 00:18:16.718 Latency(us) 00:18:16.718 Device Information : IOPS MiB/s Average min max 00:18:16.718 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30483.38 119.08 4199.16 1226.86 7600.92 00:18:16.718 ======================================================== 00:18:16.718 Total : 30483.38 119.08 4199.16 1226.86 7600.92 00:18:16.718 00:18:16.718 [2024-11-17 18:39:03.023850] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:16.718 18:39:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:16.718 [2024-11-17 18:39:03.247841] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:21.982 [2024-11-17 18:39:08.393817] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:21.982 Initializing NVMe Controllers 00:18:21.982 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:21.982 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:21.982 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:21.982 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:21.982 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:21.982 Initialization complete. Launching workers. 
00:18:21.982 Starting thread on core 2 00:18:21.982 Starting thread on core 3 00:18:21.982 Starting thread on core 1 00:18:21.982 18:39:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:22.240 [2024-11-17 18:39:08.719199] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:25.521 [2024-11-17 18:39:11.792972] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:25.521 Initializing NVMe Controllers 00:18:25.521 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:25.521 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:25.521 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:25.521 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:25.521 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:25.521 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:25.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:25.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:25.521 Initialization complete. Launching workers. 
00:18:25.521 Starting thread on core 1 with urgent priority queue 00:18:25.521 Starting thread on core 2 with urgent priority queue 00:18:25.521 Starting thread on core 3 with urgent priority queue 00:18:25.521 Starting thread on core 0 with urgent priority queue 00:18:25.521 SPDK bdev Controller (SPDK2 ) core 0: 6440.00 IO/s 15.53 secs/100000 ios 00:18:25.521 SPDK bdev Controller (SPDK2 ) core 1: 6078.33 IO/s 16.45 secs/100000 ios 00:18:25.521 SPDK bdev Controller (SPDK2 ) core 2: 5625.33 IO/s 17.78 secs/100000 ios 00:18:25.521 SPDK bdev Controller (SPDK2 ) core 3: 6016.00 IO/s 16.62 secs/100000 ios 00:18:25.521 ======================================================== 00:18:25.521 00:18:25.521 18:39:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:25.779 [2024-11-17 18:39:12.111702] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:25.779 Initializing NVMe Controllers 00:18:25.779 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:25.779 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:25.779 Namespace ID: 1 size: 0GB 00:18:25.779 Initialization complete. 00:18:25.779 INFO: using host memory buffer for IO 00:18:25.779 Hello world! 
00:18:25.779 [2024-11-17 18:39:12.121772] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:25.779 18:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:26.037 [2024-11-17 18:39:12.443249] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:26.970 Initializing NVMe Controllers 00:18:26.970 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:26.970 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:26.970 Initialization complete. Launching workers. 00:18:26.970 submit (in ns) avg, min, max = 7443.8, 3500.0, 7992010.0 00:18:26.970 complete (in ns) avg, min, max = 27114.8, 2061.1, 8004370.0 00:18:26.970 00:18:26.970 Submit histogram 00:18:26.970 ================ 00:18:26.970 Range in us Cumulative Count 00:18:26.970 3.484 - 3.508: 0.0628% ( 8) 00:18:26.970 3.508 - 3.532: 0.9883% ( 118) 00:18:26.970 3.532 - 3.556: 2.8865% ( 242) 00:18:26.970 3.556 - 3.579: 7.4751% ( 585) 00:18:26.970 3.579 - 3.603: 14.9894% ( 958) 00:18:26.970 3.603 - 3.627: 25.3667% ( 1323) 00:18:26.970 3.627 - 3.650: 35.1322% ( 1245) 00:18:26.970 3.650 - 3.674: 43.4073% ( 1055) 00:18:26.970 3.674 - 3.698: 49.0548% ( 720) 00:18:26.970 3.698 - 3.721: 55.1259% ( 774) 00:18:26.970 3.721 - 3.745: 59.1105% ( 508) 00:18:26.970 3.745 - 3.769: 62.8285% ( 474) 00:18:26.970 3.769 - 3.793: 66.2719% ( 439) 00:18:26.970 3.793 - 3.816: 69.5035% ( 412) 00:18:26.970 3.816 - 3.840: 72.9077% ( 434) 00:18:26.970 3.840 - 3.864: 77.2217% ( 550) 00:18:26.970 3.864 - 3.887: 81.2613% ( 515) 00:18:26.970 3.887 - 3.911: 84.5086% ( 414) 00:18:26.970 3.911 - 3.935: 87.0107% ( 319) 00:18:26.970 3.935 - 3.959: 88.6109% ( 204) 00:18:26.970 3.959 - 3.982: 89.9914% ( 
176) 00:18:26.970 3.982 - 4.006: 91.3248% ( 170) 00:18:26.970 4.006 - 4.030: 92.1484% ( 105) 00:18:26.970 4.030 - 4.053: 93.1681% ( 130) 00:18:26.970 4.053 - 4.077: 94.0466% ( 112) 00:18:26.970 4.077 - 4.101: 94.7447% ( 89) 00:18:26.970 4.101 - 4.124: 95.4036% ( 84) 00:18:26.970 4.124 - 4.148: 95.8271% ( 54) 00:18:26.970 4.148 - 4.172: 96.0232% ( 25) 00:18:26.970 4.172 - 4.196: 96.2193% ( 25) 00:18:26.970 4.196 - 4.219: 96.3605% ( 18) 00:18:26.970 4.219 - 4.243: 96.5095% ( 19) 00:18:26.970 4.243 - 4.267: 96.6350% ( 16) 00:18:26.970 4.267 - 4.290: 96.8076% ( 22) 00:18:26.970 4.290 - 4.314: 96.8860% ( 10) 00:18:26.970 4.314 - 4.338: 97.0115% ( 16) 00:18:26.970 4.338 - 4.361: 97.1292% ( 15) 00:18:26.970 4.361 - 4.385: 97.1998% ( 9) 00:18:26.970 4.385 - 4.409: 97.2390% ( 5) 00:18:26.970 4.409 - 4.433: 97.2939% ( 7) 00:18:26.970 4.433 - 4.456: 97.3174% ( 3) 00:18:26.970 4.480 - 4.504: 97.3331% ( 2) 00:18:26.970 4.504 - 4.527: 97.3488% ( 2) 00:18:26.970 4.527 - 4.551: 97.3567% ( 1) 00:18:26.970 4.599 - 4.622: 97.3645% ( 1) 00:18:26.970 4.622 - 4.646: 97.3723% ( 1) 00:18:26.970 4.646 - 4.670: 97.3802% ( 1) 00:18:26.970 4.670 - 4.693: 97.3880% ( 1) 00:18:26.970 4.693 - 4.717: 97.4037% ( 2) 00:18:26.970 4.717 - 4.741: 97.4272% ( 3) 00:18:26.970 4.741 - 4.764: 97.4743% ( 6) 00:18:26.970 4.764 - 4.788: 97.5057% ( 4) 00:18:26.970 4.788 - 4.812: 97.5527% ( 6) 00:18:26.970 4.812 - 4.836: 97.6077% ( 7) 00:18:26.970 4.836 - 4.859: 97.6547% ( 6) 00:18:26.970 4.859 - 4.883: 97.7645% ( 14) 00:18:26.970 4.883 - 4.907: 97.8037% ( 5) 00:18:26.970 4.907 - 4.930: 97.8508% ( 6) 00:18:26.970 4.930 - 4.954: 97.9214% ( 9) 00:18:26.970 4.954 - 4.978: 97.9449% ( 3) 00:18:26.970 4.978 - 5.001: 97.9763% ( 4) 00:18:26.970 5.001 - 5.025: 98.0312% ( 7) 00:18:26.971 5.025 - 5.049: 98.0861% ( 7) 00:18:26.971 5.049 - 5.073: 98.1332% ( 6) 00:18:26.971 5.073 - 5.096: 98.1646% ( 4) 00:18:26.971 5.096 - 5.120: 98.1724% ( 1) 00:18:26.971 5.120 - 5.144: 98.1802% ( 1) 00:18:26.971 5.144 - 5.167: 98.1881% ( 1) 
00:18:26.971 5.167 - 5.191: 98.1959% ( 1) 00:18:26.971 5.215 - 5.239: 98.2038% ( 1) 00:18:26.971 5.239 - 5.262: 98.2195% ( 2) 00:18:26.971 5.310 - 5.333: 98.2352% ( 2) 00:18:26.971 5.452 - 5.476: 98.2430% ( 1) 00:18:26.971 5.641 - 5.665: 98.2508% ( 1) 00:18:26.971 5.665 - 5.689: 98.2587% ( 1) 00:18:26.971 5.689 - 5.713: 98.2665% ( 1) 00:18:26.971 5.807 - 5.831: 98.2744% ( 1) 00:18:26.971 5.926 - 5.950: 98.2822% ( 1) 00:18:26.971 6.021 - 6.044: 98.2901% ( 1) 00:18:26.971 6.258 - 6.305: 98.2979% ( 1) 00:18:26.971 6.353 - 6.400: 98.3136% ( 2) 00:18:26.971 6.542 - 6.590: 98.3214% ( 1) 00:18:26.971 7.016 - 7.064: 98.3371% ( 2) 00:18:26.971 7.159 - 7.206: 98.3450% ( 1) 00:18:26.971 7.348 - 7.396: 98.3528% ( 1) 00:18:26.971 7.490 - 7.538: 98.3607% ( 1) 00:18:26.971 7.538 - 7.585: 98.3685% ( 1) 00:18:26.971 7.585 - 7.633: 98.3842% ( 2) 00:18:26.971 7.633 - 7.680: 98.3920% ( 1) 00:18:26.971 7.680 - 7.727: 98.3999% ( 1) 00:18:26.971 7.727 - 7.775: 98.4077% ( 1) 00:18:26.971 7.775 - 7.822: 98.4156% ( 1) 00:18:26.971 7.917 - 7.964: 98.4234% ( 1) 00:18:26.971 7.964 - 8.012: 98.4312% ( 1) 00:18:26.971 8.012 - 8.059: 98.4391% ( 1) 00:18:26.971 8.154 - 8.201: 98.4469% ( 1) 00:18:26.971 8.249 - 8.296: 98.4626% ( 2) 00:18:26.971 8.296 - 8.344: 98.4705% ( 1) 00:18:26.971 8.344 - 8.391: 98.4783% ( 1) 00:18:26.971 8.439 - 8.486: 98.4862% ( 1) 00:18:26.971 8.533 - 8.581: 98.5097% ( 3) 00:18:26.971 8.628 - 8.676: 98.5175% ( 1) 00:18:26.971 8.676 - 8.723: 98.5254% ( 1) 00:18:26.971 8.723 - 8.770: 98.5332% ( 1) 00:18:26.971 8.818 - 8.865: 98.5489% ( 2) 00:18:26.971 8.913 - 8.960: 98.5646% ( 2) 00:18:26.971 9.007 - 9.055: 98.5724% ( 1) 00:18:26.971 9.102 - 9.150: 98.5803% ( 1) 00:18:26.971 9.244 - 9.292: 98.5881% ( 1) 00:18:26.971 9.292 - 9.339: 98.5960% ( 1) 00:18:26.971 9.339 - 9.387: 98.6195% ( 3) 00:18:26.971 9.387 - 9.434: 98.6352% ( 2) 00:18:26.971 9.434 - 9.481: 98.6430% ( 1) 00:18:26.971 9.481 - 9.529: 98.6509% ( 1) 00:18:26.971 9.576 - 9.624: 98.6666% ( 2) 00:18:26.971 9.813 - 
9.861: 98.6744% ( 1) 00:18:26.971 9.861 - 9.908: 98.6822% ( 1) 00:18:26.971 10.003 - 10.050: 98.7058% ( 3) 00:18:26.971 10.050 - 10.098: 98.7136% ( 1) 00:18:26.971 10.145 - 10.193: 98.7215% ( 1) 00:18:26.971 10.287 - 10.335: 98.7293% ( 1) 00:18:26.971 10.572 - 10.619: 98.7372% ( 1) 00:18:26.971 10.761 - 10.809: 98.7528% ( 2) 00:18:26.971 11.093 - 11.141: 98.7607% ( 1) 00:18:26.971 11.141 - 11.188: 98.7764% ( 2) 00:18:26.971 11.520 - 11.567: 98.7842% ( 1) 00:18:26.971 11.567 - 11.615: 98.7921% ( 1) 00:18:26.971 11.947 - 11.994: 98.7999% ( 1) 00:18:26.971 12.089 - 12.136: 98.8077% ( 1) 00:18:26.971 12.136 - 12.231: 98.8156% ( 1) 00:18:26.971 12.231 - 12.326: 98.8313% ( 2) 00:18:26.971 12.421 - 12.516: 98.8391% ( 1) 00:18:26.971 12.516 - 12.610: 98.8548% ( 2) 00:18:26.971 12.610 - 12.705: 98.8705% ( 2) 00:18:26.971 12.705 - 12.800: 98.8862% ( 2) 00:18:26.971 12.800 - 12.895: 98.8940% ( 1) 00:18:26.971 13.274 - 13.369: 98.9176% ( 3) 00:18:26.971 14.317 - 14.412: 98.9332% ( 2) 00:18:26.971 14.412 - 14.507: 98.9411% ( 1) 00:18:26.971 14.507 - 14.601: 98.9568% ( 2) 00:18:26.971 14.791 - 14.886: 98.9646% ( 1) 00:18:26.971 14.981 - 15.076: 98.9725% ( 1) 00:18:26.971 15.170 - 15.265: 98.9803% ( 1) 00:18:26.971 15.455 - 15.550: 98.9882% ( 1) 00:18:26.971 17.161 - 17.256: 99.0038% ( 2) 00:18:26.971 17.256 - 17.351: 99.0195% ( 2) 00:18:26.971 17.351 - 17.446: 99.0431% ( 3) 00:18:26.971 17.446 - 17.541: 99.0744% ( 4) 00:18:26.971 17.541 - 17.636: 99.1215% ( 6) 00:18:26.971 17.636 - 17.730: 99.1764% ( 7) 00:18:26.971 17.730 - 17.825: 99.1842% ( 1) 00:18:26.971 17.825 - 17.920: 99.2313% ( 6) 00:18:26.971 17.920 - 18.015: 99.3097% ( 10) 00:18:26.971 18.015 - 18.110: 99.3490% ( 5) 00:18:26.971 18.110 - 18.204: 99.3960% ( 6) 00:18:26.971 18.204 - 18.299: 99.4509% ( 7) 00:18:26.971 18.299 - 18.394: 99.5137% ( 8) 00:18:26.971 18.394 - 18.489: 99.5529% ( 5) 00:18:26.971 18.489 - 18.584: 99.6470% ( 12) 00:18:26.971 18.584 - 18.679: 99.7019% ( 7) 00:18:26.971 18.679 - 18.773: 99.7412% ( 
5) 00:18:26.971 18.773 - 18.868: 99.7568% ( 2) 00:18:26.971 18.868 - 18.963: 99.8039% ( 6) 00:18:26.971 18.963 - 19.058: 99.8196% ( 2) 00:18:26.971 19.058 - 19.153: 99.8353% ( 2) 00:18:26.971 19.153 - 19.247: 99.8431% ( 1) 00:18:26.971 19.247 - 19.342: 99.8510% ( 1) 00:18:26.971 19.342 - 19.437: 99.8667% ( 2) 00:18:26.971 19.437 - 19.532: 99.8745% ( 1) 00:18:26.971 19.721 - 19.816: 99.8823% ( 1) 00:18:26.971 22.281 - 22.376: 99.8902% ( 1) 00:18:26.971 22.850 - 22.945: 99.8980% ( 1) 00:18:26.971 25.790 - 25.979: 99.9059% ( 1) 00:18:26.971 28.255 - 28.444: 99.9137% ( 1) 00:18:26.971 758.519 - 761.553: 99.9216% ( 1) 00:18:26.971 3980.705 - 4004.978: 99.9686% ( 6) 00:18:26.971 4004.978 - 4029.250: 99.9922% ( 3) 00:18:26.971 7961.410 - 8009.956: 100.0000% ( 1) 00:18:26.971 00:18:26.971 Complete histogram 00:18:26.971 ================== 00:18:26.971 Range in us Cumulative Count 00:18:26.971 2.050 - 2.062: 0.0078% ( 1) 00:18:26.971 2.062 - 2.074: 15.4130% ( 1964) 00:18:26.971 2.074 - 2.086: 47.2665% ( 4061) 00:18:26.971 2.086 - 2.098: 49.4941% ( 284) 00:18:26.971 2.098 - 2.110: 55.9652% ( 825) 00:18:26.971 2.110 - 2.121: 61.4087% ( 694) 00:18:26.971 2.121 - 2.133: 62.8285% ( 181) 00:18:26.971 2.133 - 2.145: 72.2096% ( 1196) 00:18:26.971 2.145 - 2.157: 78.1159% ( 753) 00:18:26.971 2.157 - 2.169: 78.7983% ( 87) 00:18:26.971 2.169 - 2.181: 81.0260% ( 284) 00:18:26.971 2.181 - 2.193: 82.2182% ( 152) 00:18:26.971 2.193 - 2.204: 82.7437% ( 67) 00:18:26.971 2.204 - 2.216: 86.1950% ( 440) 00:18:26.971 2.216 - 2.228: 89.6306% ( 438) 00:18:26.971 2.228 - 2.240: 91.4974% ( 238) 00:18:26.971 2.240 - 2.252: 92.9014% ( 179) 00:18:26.971 2.252 - 2.264: 93.6073% ( 90) 00:18:26.971 2.264 - 2.276: 93.7877% ( 23) 00:18:26.971 2.276 - 2.287: 94.1172% ( 42) 00:18:26.971 2.287 - 2.299: 94.5800% ( 59) 00:18:26.971 2.299 - 2.311: 95.0506% ( 60) 00:18:26.971 2.311 - 2.323: 95.3643% ( 40) 00:18:26.971 2.323 - 2.335: 95.4349% ( 9) 00:18:26.971 2.335 - 2.347: 95.5212% ( 11) 00:18:26.971 2.347 - 
2.359: 95.5683% ( 6) 00:18:26.971 2.359 - 2.370: 95.6546% ( 11) 00:18:26.971 2.370 - 2.382: 95.8350% ( 23) 00:18:26.971 2.382 - 2.394: 96.1017% ( 34) 00:18:26.971 2.394 - 2.406: 96.3448% ( 31) 00:18:26.971 2.406 - 2.418: 96.4860% ( 18) 00:18:26.971 2.418 - 2.430: 96.6664% ( 23) 00:18:26.971 2.430 - 2.441: 96.8468% ( 23) 00:18:26.971 2.441 - 2.453: 97.0037% ( 20) 00:18:26.971 2.453 - 2.465: 97.2076% ( 26) 00:18:26.971 2.465 - 2.477: 97.3567% ( 19) 00:18:26.971 2.477 - 2.489: 97.5214% ( 21) 00:18:26.971 2.489 - 2.501: 97.7959% ( 35) 00:18:26.971 2.501 - 2.513: 97.9449% ( 19) 00:18:26.971 2.513 - 2.524: 98.0547% ( 14) 00:18:26.971 2.524 - 2.536: 98.1410% ( 11) 00:18:26.971 2.536 - 2.548: 98.2508% ( 14) 00:18:26.971 2.548 - 2.560: 98.3136% ( 8) 00:18:26.971 2.560 - 2.572: 98.3528% ( 5) 00:18:26.971 2.572 - 2.584: 98.3763% ( 3) 00:18:26.971 2.584 - 2.596: 98.3999% ( 3) 00:18:26.971 2.596 - 2.607: 98.4234% ( 3) 00:18:26.971 2.607 - 2.619: 98.4391% ( 2) 00:18:26.971 2.631 - 2.643: 98.4469% ( 1) 00:18:26.971 2.679 - 2.690: 98.4548% ( 1) 00:18:26.971 2.809 - 2.821: 98.4626% ( 1) 00:18:26.971 2.844 - 2.856: 98.4705% ( 1) 00:18:26.971 3.129 - 3.153: 98.4783% ( 1) 00:18:26.971 3.437 - 3.461: 98.4862% ( 1) 00:18:26.971 3.461 - 3.484: 98.4940% ( 1) 00:18:26.971 3.484 - 3.508: 98.5018% ( 1) 00:18:26.971 3.508 - 3.532: 98.5097% ( 1) 00:18:26.971 3.556 - 3.579: 98.5175% ( 1) 00:18:26.971 3.579 - 3.603: 98.5254% ( 1) 00:18:26.971 3.603 - 3.627: 98.5332% ( 1) 00:18:26.971 3.627 - 3.650: 98.5411% ( 1) 00:18:26.971 3.650 - 3.674: 98.5489% ( 1) 00:18:26.971 3.674 - 3.698: 98.5646% ( 2) 00:18:26.971 3.721 - 3.745: 98.5724% ( 1) 00:18:26.971 3.745 - 3.769: 98.6038% ( 4) 00:18:26.971 3.793 - 3.816: 98.6117% ( 1) 00:18:26.971 3.816 - 3.840: 98.6273% ( 2) 00:18:26.971 3.887 - 3.911: 98.6352% ( 1) 00:18:26.972 3.935 - 3.959: 98.6430% ( 1) 00:18:26.972 3.982 - 4.006: 98.6509% ( 1) 00:18:26.972 4.124 - 4.148: 98.6587% ( 1) 00:18:26.972 4.148 - 4.172: 98.6744% ( 2) 00:18:26.972 4.219 - 4.243: 
98.6901% ( 2) 00:18:27.229 [2024-11-17 18:39:13.546509] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:27.229 5.665 - 5.689: 98.6979% ( 1) 00:18:27.229 6.021 - 6.044: 98.7058% ( 1) 00:18:27.229 6.163 - 6.210: 98.7136% ( 1) 00:18:27.229 6.258 - 6.305: 98.7215% ( 1) 00:18:27.229 6.353 - 6.400: 98.7293% ( 1) 00:18:27.229 6.495 - 6.542: 98.7450% ( 2) 00:18:27.229 6.590 - 6.637: 98.7528% ( 1) 00:18:27.229 6.874 - 6.921: 98.7607% ( 1) 00:18:27.229 7.064 - 7.111: 98.7685% ( 1) 00:18:27.229 7.159 - 7.206: 98.7764% ( 1) 00:18:27.229 7.348 - 7.396: 98.7842% ( 1) 00:18:27.229 7.396 - 7.443: 98.7921% ( 1) 00:18:27.229 7.538 - 7.585: 98.7999% ( 1) 00:18:27.229 8.296 - 8.344: 98.8077% ( 1) 00:18:27.229 8.818 - 8.865: 98.8156% ( 1) 00:18:27.229 15.455 - 15.550: 98.8234% ( 1) 00:18:27.229 15.550 - 15.644: 98.8313% ( 1) 00:18:27.229 15.739 - 15.834: 98.8470% ( 2) 00:18:27.229 15.834 - 15.929: 98.8627% ( 2) 00:18:27.229 15.929 - 16.024: 98.8783% ( 2) 00:18:27.229 16.024 - 16.119: 98.9019% ( 3) 00:18:27.229 16.119 - 16.213: 98.9097% ( 1) 00:18:27.229 16.213 - 16.308: 98.9489% ( 5) 00:18:27.229 16.308 - 16.403: 98.9725% ( 3) 00:18:27.229 16.403 - 16.498: 98.9960% ( 3) 00:18:27.229 16.498 - 16.593: 99.0352% ( 5) 00:18:27.229 16.593 - 16.687: 99.0666% ( 4) 00:18:27.229 16.687 - 16.782: 99.1372% ( 9) 00:18:27.229 16.782 - 16.877: 99.1686% ( 4) 00:18:27.229 16.877 - 16.972: 99.1999% ( 4) 00:18:27.229 16.972 - 17.067: 99.2156% ( 2) 00:18:27.229 17.067 - 17.161: 99.2235% ( 1) 00:18:27.229 17.161 - 17.256: 99.2313% ( 1) 00:18:27.229 17.256 - 17.351: 99.2548% ( 3) 00:18:27.229 17.351 - 17.446: 99.2705% ( 2) 00:18:27.229 17.446 - 17.541: 99.2862% ( 2) 00:18:27.229 17.541 - 17.636: 99.3019% ( 2) 00:18:27.229 17.636 - 17.730: 99.3097% ( 1) 00:18:27.230 17.730 - 17.825: 99.3176% ( 1) 00:18:27.230 17.825 - 17.920: 99.3254% ( 1) 00:18:27.230 18.015 - 18.110: 99.3411% ( 2) 00:18:27.230 18.204 - 18.299: 99.3568% ( 2) 00:18:27.230 18.394 -
18.489: 99.3725% ( 2) 00:18:27.230 18.773 - 18.868: 99.3803% ( 1) 00:18:27.230 50.441 - 50.821: 99.3882% ( 1) 00:18:27.230 2014.625 - 2026.761: 99.3960% ( 1) 00:18:27.230 3155.437 - 3179.710: 99.4039% ( 1) 00:18:27.230 3980.705 - 4004.978: 99.8353% ( 55) 00:18:27.230 4004.978 - 4029.250: 99.9843% ( 19) 00:18:27.230 7961.410 - 8009.956: 100.0000% ( 2) 00:18:27.230 00:18:27.230 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:27.230 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:27.230 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:27.230 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:27.230 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:27.487 [ 00:18:27.487 { 00:18:27.487 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:27.487 "subtype": "Discovery", 00:18:27.487 "listen_addresses": [], 00:18:27.487 "allow_any_host": true, 00:18:27.487 "hosts": [] 00:18:27.487 }, 00:18:27.487 { 00:18:27.487 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:27.487 "subtype": "NVMe", 00:18:27.487 "listen_addresses": [ 00:18:27.487 { 00:18:27.487 "trtype": "VFIOUSER", 00:18:27.487 "adrfam": "IPv4", 00:18:27.487 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:27.487 "trsvcid": "0" 00:18:27.487 } 00:18:27.487 ], 00:18:27.487 "allow_any_host": true, 00:18:27.487 "hosts": [], 00:18:27.487 "serial_number": "SPDK1", 00:18:27.487 "model_number": "SPDK bdev Controller", 00:18:27.487 "max_namespaces": 32, 00:18:27.487 "min_cntlid": 1, 00:18:27.487 "max_cntlid": 65519, 00:18:27.487 
"namespaces": [ 00:18:27.487 { 00:18:27.487 "nsid": 1, 00:18:27.487 "bdev_name": "Malloc1", 00:18:27.487 "name": "Malloc1", 00:18:27.487 "nguid": "8DFD2ECCCFC1488C9C69894E4E29EE35", 00:18:27.487 "uuid": "8dfd2ecc-cfc1-488c-9c69-894e4e29ee35" 00:18:27.487 }, 00:18:27.487 { 00:18:27.487 "nsid": 2, 00:18:27.487 "bdev_name": "Malloc3", 00:18:27.487 "name": "Malloc3", 00:18:27.487 "nguid": "AC32ABB148AA4F17A716654EC7591E49", 00:18:27.487 "uuid": "ac32abb1-48aa-4f17-a716-654ec7591e49" 00:18:27.487 } 00:18:27.487 ] 00:18:27.488 }, 00:18:27.488 { 00:18:27.488 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:27.488 "subtype": "NVMe", 00:18:27.488 "listen_addresses": [ 00:18:27.488 { 00:18:27.488 "trtype": "VFIOUSER", 00:18:27.488 "adrfam": "IPv4", 00:18:27.488 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:27.488 "trsvcid": "0" 00:18:27.488 } 00:18:27.488 ], 00:18:27.488 "allow_any_host": true, 00:18:27.488 "hosts": [], 00:18:27.488 "serial_number": "SPDK2", 00:18:27.488 "model_number": "SPDK bdev Controller", 00:18:27.488 "max_namespaces": 32, 00:18:27.488 "min_cntlid": 1, 00:18:27.488 "max_cntlid": 65519, 00:18:27.488 "namespaces": [ 00:18:27.488 { 00:18:27.488 "nsid": 1, 00:18:27.488 "bdev_name": "Malloc2", 00:18:27.488 "name": "Malloc2", 00:18:27.488 "nguid": "98C3A36319FB4D8983DB97C6140BBA7E", 00:18:27.488 "uuid": "98c3a363-19fb-4d89-83db-97c6140bba7e" 00:18:27.488 } 00:18:27.488 ] 00:18:27.488 } 00:18:27.488 ] 00:18:27.488 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:27.488 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=724980 00:18:27.488 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:27.488 
18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:27.488 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:27.488 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:27.488 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:27.488 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:27.488 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:27.488 18:39:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:27.746 [2024-11-17 18:39:14.085202] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:27.746 Malloc4 00:18:27.746 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:28.003 [2024-11-17 18:39:14.471132] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:28.003 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:28.003 Asynchronous Event Request test 00:18:28.003 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:28.003 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:28.003 Registering asynchronous event callbacks... 00:18:28.003 Starting namespace attribute notice tests for all controllers... 
00:18:28.003 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:28.003 aer_cb - Changed Namespace 00:18:28.003 Cleaning up... 00:18:28.261 [ 00:18:28.261 { 00:18:28.261 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:28.261 "subtype": "Discovery", 00:18:28.261 "listen_addresses": [], 00:18:28.261 "allow_any_host": true, 00:18:28.261 "hosts": [] 00:18:28.261 }, 00:18:28.261 { 00:18:28.261 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:28.261 "subtype": "NVMe", 00:18:28.261 "listen_addresses": [ 00:18:28.261 { 00:18:28.261 "trtype": "VFIOUSER", 00:18:28.261 "adrfam": "IPv4", 00:18:28.261 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:28.261 "trsvcid": "0" 00:18:28.261 } 00:18:28.261 ], 00:18:28.261 "allow_any_host": true, 00:18:28.261 "hosts": [], 00:18:28.261 "serial_number": "SPDK1", 00:18:28.261 "model_number": "SPDK bdev Controller", 00:18:28.261 "max_namespaces": 32, 00:18:28.261 "min_cntlid": 1, 00:18:28.261 "max_cntlid": 65519, 00:18:28.261 "namespaces": [ 00:18:28.261 { 00:18:28.261 "nsid": 1, 00:18:28.261 "bdev_name": "Malloc1", 00:18:28.261 "name": "Malloc1", 00:18:28.261 "nguid": "8DFD2ECCCFC1488C9C69894E4E29EE35", 00:18:28.261 "uuid": "8dfd2ecc-cfc1-488c-9c69-894e4e29ee35" 00:18:28.261 }, 00:18:28.261 { 00:18:28.261 "nsid": 2, 00:18:28.261 "bdev_name": "Malloc3", 00:18:28.261 "name": "Malloc3", 00:18:28.261 "nguid": "AC32ABB148AA4F17A716654EC7591E49", 00:18:28.261 "uuid": "ac32abb1-48aa-4f17-a716-654ec7591e49" 00:18:28.261 } 00:18:28.261 ] 00:18:28.261 }, 00:18:28.261 { 00:18:28.261 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:28.261 "subtype": "NVMe", 00:18:28.262 "listen_addresses": [ 00:18:28.262 { 00:18:28.262 "trtype": "VFIOUSER", 00:18:28.262 "adrfam": "IPv4", 00:18:28.262 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:28.262 "trsvcid": "0" 00:18:28.262 } 00:18:28.262 ], 00:18:28.262 "allow_any_host": true, 00:18:28.262 "hosts": [], 00:18:28.262 "serial_number": 
"SPDK2", 00:18:28.262 "model_number": "SPDK bdev Controller", 00:18:28.262 "max_namespaces": 32, 00:18:28.262 "min_cntlid": 1, 00:18:28.262 "max_cntlid": 65519, 00:18:28.262 "namespaces": [ 00:18:28.262 { 00:18:28.262 "nsid": 1, 00:18:28.262 "bdev_name": "Malloc2", 00:18:28.262 "name": "Malloc2", 00:18:28.262 "nguid": "98C3A36319FB4D8983DB97C6140BBA7E", 00:18:28.262 "uuid": "98c3a363-19fb-4d89-83db-97c6140bba7e" 00:18:28.262 }, 00:18:28.262 { 00:18:28.262 "nsid": 2, 00:18:28.262 "bdev_name": "Malloc4", 00:18:28.262 "name": "Malloc4", 00:18:28.262 "nguid": "9813B40D109148ECABE8873D6AF4FF9B", 00:18:28.262 "uuid": "9813b40d-1091-48ec-abe8-873d6af4ff9b" 00:18:28.262 } 00:18:28.262 ] 00:18:28.262 } 00:18:28.262 ] 00:18:28.262 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 724980 00:18:28.262 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:28.262 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 718776 00:18:28.262 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 718776 ']' 00:18:28.262 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 718776 00:18:28.262 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:28.262 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.262 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 718776 00:18:28.262 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:28.262 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:28.262 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 718776' 00:18:28.262 killing process with pid 718776 00:18:28.262 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 718776 00:18:28.262 18:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 718776 00:18:28.829 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=725122 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 725122' 00:18:28.830 Process pid: 725122 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 725122 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 725122 ']' 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:28.830 [2024-11-17 18:39:15.167748] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:28.830 [2024-11-17 18:39:15.168833] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:18:28.830 [2024-11-17 18:39:15.168904] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.830 [2024-11-17 18:39:15.235445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:28.830 [2024-11-17 18:39:15.276847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.830 [2024-11-17 18:39:15.276905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.830 [2024-11-17 18:39:15.276933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.830 [2024-11-17 18:39:15.276944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.830 [2024-11-17 18:39:15.276954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:28.830 [2024-11-17 18:39:15.278365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.830 [2024-11-17 18:39:15.278473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.830 [2024-11-17 18:39:15.278549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:28.830 [2024-11-17 18:39:15.278551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.830 [2024-11-17 18:39:15.358106] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:28.830 [2024-11-17 18:39:15.358342] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:28.830 [2024-11-17 18:39:15.358597] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:28.830 [2024-11-17 18:39:15.359200] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:28.830 [2024-11-17 18:39:15.359425] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:28.830 18:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:30.205 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:30.205 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:30.205 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:30.205 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:30.205 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:30.205 18:39:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:30.773 Malloc1 00:18:30.773 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:31.031 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:31.289 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:18:31.547 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:31.547 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:31.547 18:39:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:31.805 Malloc2 00:18:31.805 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:32.063 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:32.320 18:39:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:32.578 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:32.578 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 725122 00:18:32.578 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 725122 ']' 00:18:32.578 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 725122 00:18:32.578 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:32.578 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.578 18:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 725122 00:18:32.578 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.578 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.578 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 725122' 00:18:32.578 killing process with pid 725122 00:18:32.578 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 725122 00:18:32.578 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 725122 00:18:32.835 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:32.835 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:32.835 00:18:32.835 real 0m53.354s 00:18:32.835 user 3m26.331s 00:18:32.835 sys 0m3.878s 00:18:32.835 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.835 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:32.835 ************************************ 00:18:32.835 END TEST nvmf_vfio_user 00:18:32.835 ************************************ 00:18:32.835 18:39:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:32.836 18:39:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:32.836 18:39:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.836 18:39:19 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:18:32.836 ************************************ 00:18:32.836 START TEST nvmf_vfio_user_nvme_compliance 00:18:32.836 ************************************ 00:18:32.836 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:32.836 * Looking for test storage... 00:18:32.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:32.836 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:32.836 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:18:32.836 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:33.094 18:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:33.094 18:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:33.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.094 --rc genhtml_branch_coverage=1 00:18:33.094 --rc genhtml_function_coverage=1 00:18:33.094 --rc genhtml_legend=1 00:18:33.094 --rc geninfo_all_blocks=1 00:18:33.094 --rc geninfo_unexecuted_blocks=1 00:18:33.094 00:18:33.094 ' 00:18:33.094 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:33.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.095 --rc genhtml_branch_coverage=1 00:18:33.095 --rc genhtml_function_coverage=1 00:18:33.095 --rc genhtml_legend=1 00:18:33.095 --rc geninfo_all_blocks=1 00:18:33.095 --rc geninfo_unexecuted_blocks=1 00:18:33.095 00:18:33.095 ' 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:33.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.095 --rc genhtml_branch_coverage=1 00:18:33.095 --rc genhtml_function_coverage=1 00:18:33.095 --rc 
genhtml_legend=1 00:18:33.095 --rc geninfo_all_blocks=1 00:18:33.095 --rc geninfo_unexecuted_blocks=1 00:18:33.095 00:18:33.095 ' 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:33.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.095 --rc genhtml_branch_coverage=1 00:18:33.095 --rc genhtml_function_coverage=1 00:18:33.095 --rc genhtml_legend=1 00:18:33.095 --rc geninfo_all_blocks=1 00:18:33.095 --rc geninfo_unexecuted_blocks=1 00:18:33.095 00:18:33.095 ' 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.095 18:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:33.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:33.095 18:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=725725 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 725725' 00:18:33.095 Process pid: 725725 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 725725 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 725725 ']' 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.095 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:33.095 [2024-11-17 18:39:19.561847] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:18:33.095 [2024-11-17 18:39:19.561930] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.095 [2024-11-17 18:39:19.630179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:33.353 [2024-11-17 18:39:19.680515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.353 [2024-11-17 18:39:19.680564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.353 [2024-11-17 18:39:19.680593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.353 [2024-11-17 18:39:19.680604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.353 [2024-11-17 18:39:19.680614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:33.353 [2024-11-17 18:39:19.682158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.353 [2024-11-17 18:39:19.682226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.353 [2024-11-17 18:39:19.682229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.353 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.353 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:33.353 18:39:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:34.286 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:34.286 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:34.286 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:34.286 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.286 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:34.286 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.286 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:34.286 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:34.286 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.286 18:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:34.286 malloc0 00:18:34.286 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.286 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:34.286 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.286 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:34.544 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.544 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:34.544 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.544 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:34.544 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.544 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:34.544 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.544 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:34.544 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:34.544 18:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:34.544 00:18:34.544 00:18:34.544 CUnit - A unit testing framework for C - Version 2.1-3 00:18:34.544 http://cunit.sourceforge.net/ 00:18:34.544 00:18:34.544 00:18:34.544 Suite: nvme_compliance 00:18:34.544 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-17 18:39:21.056212] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:34.544 [2024-11-17 18:39:21.057680] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:34.544 [2024-11-17 18:39:21.057705] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:34.544 [2024-11-17 18:39:21.057716] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:34.544 [2024-11-17 18:39:21.059230] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:34.544 passed 00:18:34.802 Test: admin_identify_ctrlr_verify_fused ...[2024-11-17 18:39:21.146856] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:34.802 [2024-11-17 18:39:21.149877] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:34.802 passed 00:18:34.802 Test: admin_identify_ns ...[2024-11-17 18:39:21.235241] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:34.802 [2024-11-17 18:39:21.294691] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:34.802 [2024-11-17 18:39:21.302694] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:34.802 [2024-11-17 18:39:21.323819] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:18:34.802 passed 00:18:35.059 Test: admin_get_features_mandatory_features ...[2024-11-17 18:39:21.410034] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:35.059 [2024-11-17 18:39:21.413041] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:35.059 passed 00:18:35.059 Test: admin_get_features_optional_features ...[2024-11-17 18:39:21.495555] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:35.059 [2024-11-17 18:39:21.498573] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:35.059 passed 00:18:35.059 Test: admin_set_features_number_of_queues ...[2024-11-17 18:39:21.581746] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:35.316 [2024-11-17 18:39:21.688806] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:35.317 passed 00:18:35.317 Test: admin_get_log_page_mandatory_logs ...[2024-11-17 18:39:21.773498] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:35.317 [2024-11-17 18:39:21.776522] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:35.317 passed 00:18:35.317 Test: admin_get_log_page_with_lpo ...[2024-11-17 18:39:21.856670] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:35.573 [2024-11-17 18:39:21.926705] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:35.573 [2024-11-17 18:39:21.939768] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:35.573 passed 00:18:35.573 Test: fabric_property_get ...[2024-11-17 18:39:22.023401] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:35.573 [2024-11-17 18:39:22.024696] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:35.573 [2024-11-17 18:39:22.026425] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:35.573 passed 00:18:35.573 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-17 18:39:22.107956] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:35.574 [2024-11-17 18:39:22.109239] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:35.574 [2024-11-17 18:39:22.112998] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:35.574 passed 00:18:35.830 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-17 18:39:22.194185] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:35.830 [2024-11-17 18:39:22.281697] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:35.830 [2024-11-17 18:39:22.297699] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:35.830 [2024-11-17 18:39:22.302799] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:35.830 passed 00:18:35.830 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-17 18:39:22.383444] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:35.830 [2024-11-17 18:39:22.384766] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:35.830 [2024-11-17 18:39:22.388474] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:36.088 passed 00:18:36.088 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-17 18:39:22.472617] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:36.088 [2024-11-17 18:39:22.549715] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:36.088 [2024-11-17 
18:39:22.573703] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:36.088 [2024-11-17 18:39:22.578789] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:36.088 passed 00:18:36.088 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-17 18:39:22.663416] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:36.345 [2024-11-17 18:39:22.664766] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:36.345 [2024-11-17 18:39:22.664806] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:36.345 [2024-11-17 18:39:22.666439] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:36.345 passed 00:18:36.345 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-17 18:39:22.749709] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:36.346 [2024-11-17 18:39:22.843697] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:36.346 [2024-11-17 18:39:22.851712] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:36.346 [2024-11-17 18:39:22.859689] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:36.346 [2024-11-17 18:39:22.867681] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:36.346 [2024-11-17 18:39:22.896796] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:36.603 passed 00:18:36.603 Test: admin_create_io_sq_verify_pc ...[2024-11-17 18:39:22.980742] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:36.603 [2024-11-17 18:39:22.994700] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:36.603 [2024-11-17 18:39:23.012110] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:36.603 passed 00:18:36.603 Test: admin_create_io_qp_max_qps ...[2024-11-17 18:39:23.098689] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:37.975 [2024-11-17 18:39:24.201693] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:18:38.233 [2024-11-17 18:39:24.592419] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.233 passed 00:18:38.233 Test: admin_create_io_sq_shared_cq ...[2024-11-17 18:39:24.674737] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:38.233 [2024-11-17 18:39:24.808698] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:38.491 [2024-11-17 18:39:24.845788] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:38.491 passed 00:18:38.491 00:18:38.491 Run Summary: Type Total Ran Passed Failed Inactive 00:18:38.491 suites 1 1 n/a 0 0 00:18:38.491 tests 18 18 18 0 0 00:18:38.491 asserts 360 360 360 0 n/a 00:18:38.491 00:18:38.491 Elapsed time = 1.571 seconds 00:18:38.491 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 725725 00:18:38.491 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 725725 ']' 00:18:38.491 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 725725 00:18:38.491 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:18:38.491 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.491 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 725725 00:18:38.491 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.491 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.491 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 725725' 00:18:38.491 killing process with pid 725725 00:18:38.491 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 725725 00:18:38.491 18:39:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 725725 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:38.749 00:18:38.749 real 0m5.779s 00:18:38.749 user 0m16.255s 00:18:38.749 sys 0m0.580s 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:38.749 ************************************ 00:18:38.749 END TEST nvmf_vfio_user_nvme_compliance 00:18:38.749 ************************************ 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:38.749 ************************************ 00:18:38.749 START TEST nvmf_vfio_user_fuzz 00:18:38.749 ************************************ 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:38.749 * Looking for test storage... 00:18:38.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:38.749 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:38.750 18:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:38.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.750 --rc genhtml_branch_coverage=1 00:18:38.750 --rc genhtml_function_coverage=1 00:18:38.750 --rc genhtml_legend=1 00:18:38.750 --rc geninfo_all_blocks=1 00:18:38.750 --rc geninfo_unexecuted_blocks=1 00:18:38.750 00:18:38.750 ' 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:38.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.750 --rc genhtml_branch_coverage=1 00:18:38.750 --rc genhtml_function_coverage=1 00:18:38.750 --rc genhtml_legend=1 00:18:38.750 --rc geninfo_all_blocks=1 00:18:38.750 --rc geninfo_unexecuted_blocks=1 00:18:38.750 00:18:38.750 ' 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:38.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.750 --rc genhtml_branch_coverage=1 00:18:38.750 --rc genhtml_function_coverage=1 00:18:38.750 --rc genhtml_legend=1 00:18:38.750 --rc geninfo_all_blocks=1 00:18:38.750 --rc geninfo_unexecuted_blocks=1 00:18:38.750 00:18:38.750 ' 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:38.750 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:38.750 --rc genhtml_branch_coverage=1 00:18:38.750 --rc genhtml_function_coverage=1 00:18:38.750 --rc genhtml_legend=1 00:18:38.750 --rc geninfo_all_blocks=1 00:18:38.750 --rc geninfo_unexecuted_blocks=1 00:18:38.750 00:18:38.750 ' 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.750 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.009 18:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:39.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=726459 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 726459' 00:18:39.009 Process pid: 726459 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 726459 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 726459 ']' 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.009 18:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.009 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:39.267 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.267 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:39.267 18:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:40.199 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:40.199 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.199 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:40.199 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.199 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:40.199 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:40.199 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.199 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:40.199 malloc0 00:18:40.199 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.200 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:40.200 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.200 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:40.200 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.200 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:40.200 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.200 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:40.200 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.200 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:40.200 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.200 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:40.200 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.200 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:40.200 18:39:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:12.286 Fuzzing completed. Shutting down the fuzz application 00:19:12.286 00:19:12.286 Dumping successful admin opcodes: 00:19:12.286 8, 9, 10, 24, 00:19:12.286 Dumping successful io opcodes: 00:19:12.286 0, 00:19:12.286 NS: 0x20000081ef00 I/O qp, Total commands completed: 674024, total successful commands: 2621, random_seed: 2298121280 00:19:12.286 NS: 0x20000081ef00 admin qp, Total commands completed: 89086, total successful commands: 714, random_seed: 2328383104 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 726459 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 726459 ']' 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 726459 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 726459 00:19:12.286 18:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 726459' 00:19:12.286 killing process with pid 726459 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 726459 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 726459 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:12.286 00:19:12.286 real 0m32.149s 00:19:12.286 user 0m30.113s 00:19:12.286 sys 0m29.631s 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:12.286 ************************************ 00:19:12.286 END TEST nvmf_vfio_user_fuzz 00:19:12.286 ************************************ 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:12.286 ************************************ 00:19:12.286 START TEST nvmf_auth_target 00:19:12.286 ************************************ 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:12.286 * Looking for test storage... 00:19:12.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:12.286 18:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:12.286 18:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:12.286 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:12.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.286 --rc genhtml_branch_coverage=1 00:19:12.287 --rc genhtml_function_coverage=1 00:19:12.287 --rc genhtml_legend=1 00:19:12.287 --rc geninfo_all_blocks=1 00:19:12.287 --rc geninfo_unexecuted_blocks=1 00:19:12.287 00:19:12.287 ' 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:12.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.287 --rc genhtml_branch_coverage=1 00:19:12.287 --rc genhtml_function_coverage=1 00:19:12.287 --rc genhtml_legend=1 00:19:12.287 --rc geninfo_all_blocks=1 00:19:12.287 --rc geninfo_unexecuted_blocks=1 00:19:12.287 00:19:12.287 ' 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:12.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.287 --rc genhtml_branch_coverage=1 00:19:12.287 --rc genhtml_function_coverage=1 00:19:12.287 --rc genhtml_legend=1 00:19:12.287 --rc geninfo_all_blocks=1 00:19:12.287 --rc geninfo_unexecuted_blocks=1 00:19:12.287 00:19:12.287 ' 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:12.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.287 --rc genhtml_branch_coverage=1 00:19:12.287 --rc genhtml_function_coverage=1 00:19:12.287 --rc genhtml_legend=1 00:19:12.287 
--rc geninfo_all_blocks=1 00:19:12.287 --rc geninfo_unexecuted_blocks=1 00:19:12.287 00:19:12.287 ' 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.287 
18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:12.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:12.287 18:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:12.287 18:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:12.287 18:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:13.222 18:39:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:13.222 18:39:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:13.222 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:13.222 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:13.222 
18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:13.222 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:13.223 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:13.223 
18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:13.223 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:13.223 18:39:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:13.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:13.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:19:13.223 00:19:13.223 --- 10.0.0.2 ping statistics --- 00:19:13.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.223 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:13.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:13.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:19:13.223 00:19:13.223 --- 10.0.0.1 ping statistics --- 00:19:13.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.223 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=731917 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 731917 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 731917 ']' 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
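The trace above starts `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace and then blocks in `waitforlisten` until the app's RPC socket at `/var/tmp/spdk.sock` appears. A minimal sketch of that poll-until-ready pattern, with a plain file standing in for the real UNIX-domain socket (all paths and names here are illustrative, not taken from the SPDK scripts):

```shell
# Hedged sketch of the waitforlisten pattern in the trace above: launch a
# background process, then poll with bounded retries until it signals ready.
# /tmp/demo.ready stands in for the real /var/tmp/spdk.sock RPC socket.
READY=/tmp/demo.ready
rm -f "$READY"
( sleep 0.3; : > "$READY" ) &     # stand-in for nvmf_tgt creating its RPC socket
for i in $(seq 1 100); do         # bounded retries, like max_retries=100 above
    [ -e "$READY" ] && break
    sleep 0.1
done
[ -e "$READY" ] && echo "listener ready"
wait
```

The bounded loop (rather than an unbounded `while`) is what lets the real script time out and fail the test cleanly if the target never comes up.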
00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.223 18:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.819 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.819 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:13.819 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:13.819 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:13.819 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.819 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.819 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=731938 00:19:13.819 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:13.819 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:13.819 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:13.819 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d81a4f67bd227ae8f129ae73f1e994371e3a105b9776ec5e 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Gyz 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d81a4f67bd227ae8f129ae73f1e994371e3a105b9776ec5e 0 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d81a4f67bd227ae8f129ae73f1e994371e3a105b9776ec5e 0 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d81a4f67bd227ae8f129ae73f1e994371e3a105b9776ec5e 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Gyz 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Gyz 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Gyz 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=489165e2874cd5710b435b455ec518252c359936062172cf2671ebe868e2ae2e 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.BMl 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 489165e2874cd5710b435b455ec518252c359936062172cf2671ebe868e2ae2e 3 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 489165e2874cd5710b435b455ec518252c359936062172cf2671ebe868e2ae2e 3 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=489165e2874cd5710b435b455ec518252c359936062172cf2671ebe868e2ae2e 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.BMl 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.BMl 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.BMl 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e841195952c338e06fabe182f89ae643 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.QvV 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e841195952c338e06fabe182f89ae643 1 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
e841195952c338e06fabe182f89ae643 1 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e841195952c338e06fabe182f89ae643 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.QvV 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.QvV 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.QvV 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f3e4a4dbbacbddf37911b98556a94ad2a060fa5934b14b7b 00:19:13.820 18:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.oFv 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f3e4a4dbbacbddf37911b98556a94ad2a060fa5934b14b7b 2 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f3e4a4dbbacbddf37911b98556a94ad2a060fa5934b14b7b 2 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f3e4a4dbbacbddf37911b98556a94ad2a060fa5934b14b7b 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.oFv 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.oFv 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.oFv 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=51b14974ee71aa313f5542d52198c65942bfed5a98301f61 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3cD 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 51b14974ee71aa313f5542d52198c65942bfed5a98301f61 2 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 51b14974ee71aa313f5542d52198c65942bfed5a98301f61 2 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=51b14974ee71aa313f5542d52198c65942bfed5a98301f61 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3cD 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3cD 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.3cD 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:13.820 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:13.821 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:13.821 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:13.821 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:13.821 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:13.821 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=93a29d080fac61e6b432ae4412007a6f 00:19:13.821 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:13.821 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.zAl 00:19:13.821 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 93a29d080fac61e6b432ae4412007a6f 1 00:19:13.821 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 93a29d080fac61e6b432ae4412007a6f 1 00:19:13.821 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:13.821 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:13.821 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=93a29d080fac61e6b432ae4412007a6f 00:19:13.821 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:19:13.821 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.zAl 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.zAl 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.zAl 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9b37199c0bceb35aa9988e9fb05534bf87deb8287a9ab647c0fe29359b5cd69a 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PPY 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9b37199c0bceb35aa9988e9fb05534bf87deb8287a9ab647c0fe29359b5cd69a 3 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 9b37199c0bceb35aa9988e9fb05534bf87deb8287a9ab647c0fe29359b5cd69a 3 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9b37199c0bceb35aa9988e9fb05534bf87deb8287a9ab647c0fe29359b5cd69a 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PPY 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PPY 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.PPY 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 731917 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 731917 ']' 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
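[editor's note] The `gen_dhchap_key`/`format_dhchap_key` steps traced above (a hex key read via `xxd -p -c0` from `/dev/urandom`, a small `python -` formatting step, then `chmod 0600` on the `mktemp` file) can be sketched as below. This is a reconstruction from the trace, not a quote of SPDK's `nvmf/common.sh`: the DHHC-1 representation is assumed to be base64 of the ASCII hex key text with a little-endian CRC-32 appended, and the middle field is the digest index from the trace's `digests` map (`null`=0, `sha256`=1, `sha384`=2, `sha512`=3).

```python
import base64
import os
import zlib

# Digest indices from the trace's associative array:
# digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
DIGESTS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

def gen_hex_key(length: int) -> str:
    """Equivalent of `xxd -p -c0 -l <length/2> /dev/urandom`:
    a lowercase hex string of `length` characters."""
    return os.urandom(length // 2).hex()

def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Sketch of the trace's format_key python step (an assumption, not
    SPDK's verbatim code): base64-encode the ASCII key text followed by
    its little-endian CRC-32, wrapped as PREFIX:DD:<base64>:."""
    data = key.encode()
    data += zlib.crc32(data).to_bytes(4, "little")
    return "{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(data).decode())
```

Applied to the sha384 key from the trace (`f3e4a4db...4b7b`, digest index 2), the base64 body begins with the same `ZjNlNGE0...` text that appears in the `--dhchap-ctrl-secret` arguments further down, which is what the sketch is modeled on.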
00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.099 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.357 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.357 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:14.357 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 731938 /var/tmp/host.sock 00:19:14.357 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 731938 ']' 00:19:14.357 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:14.357 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.357 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:14.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:14.357 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.357 18:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.614 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.614 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:14.614 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:14.614 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.614 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.614 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.614 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:14.614 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Gyz 00:19:14.615 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.615 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.615 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.615 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Gyz 00:19:14.615 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Gyz 00:19:14.872 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.BMl ]] 00:19:14.872 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BMl 00:19:14.872 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.872 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.872 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.872 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BMl 00:19:14.872 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BMl 00:19:15.128 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:15.128 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.QvV 00:19:15.128 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.128 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.128 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.128 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.QvV 00:19:15.128 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.QvV 00:19:15.385 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.oFv ]] 00:19:15.385 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.oFv 00:19:15.385 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.385 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.385 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.385 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.oFv 00:19:15.385 18:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.oFv 00:19:15.643 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:15.643 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.3cD 00:19:15.643 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.643 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.643 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.643 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.3cD 00:19:15.643 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.3cD 00:19:15.901 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.zAl ]] 00:19:15.901 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zAl 00:19:15.901 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.901 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.901 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.901 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zAl 00:19:15.901 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zAl 00:19:16.159 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:16.159 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.PPY 00:19:16.159 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.159 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.159 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.159 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.PPY 00:19:16.159 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.PPY 00:19:16.416 18:40:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:16.416 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:16.416 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.416 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.416 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:16.416 18:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:16.674 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:16.674 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.674 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:16.674 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:16.674 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:16.674 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.674 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.674 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.674 18:40:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.932 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.932 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.932 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.932 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.190 00:19:17.190 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.190 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.190 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.448 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.448 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.448 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.448 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:17.448 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.448 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.448 { 00:19:17.448 "cntlid": 1, 00:19:17.448 "qid": 0, 00:19:17.448 "state": "enabled", 00:19:17.448 "thread": "nvmf_tgt_poll_group_000", 00:19:17.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:17.448 "listen_address": { 00:19:17.448 "trtype": "TCP", 00:19:17.448 "adrfam": "IPv4", 00:19:17.448 "traddr": "10.0.0.2", 00:19:17.448 "trsvcid": "4420" 00:19:17.448 }, 00:19:17.448 "peer_address": { 00:19:17.448 "trtype": "TCP", 00:19:17.448 "adrfam": "IPv4", 00:19:17.448 "traddr": "10.0.0.1", 00:19:17.448 "trsvcid": "50124" 00:19:17.448 }, 00:19:17.448 "auth": { 00:19:17.448 "state": "completed", 00:19:17.448 "digest": "sha256", 00:19:17.448 "dhgroup": "null" 00:19:17.448 } 00:19:17.448 } 00:19:17.448 ]' 00:19:17.448 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.448 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.448 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.448 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:17.448 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.448 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.448 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.448 18:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.706 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:19:17.706 18:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:19:18.638 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.639 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.639 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.639 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.639 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.639 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.639 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:18.639 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:18.897 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:18.897 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.897 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.897 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:18.897 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:18.897 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.897 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.897 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.897 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.897 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.897 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.897 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.897 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.463 00:19:19.463 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.463 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.463 18:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.721 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.721 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.721 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.721 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.721 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.721 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.721 { 00:19:19.721 "cntlid": 3, 00:19:19.721 "qid": 0, 00:19:19.721 "state": "enabled", 00:19:19.721 "thread": "nvmf_tgt_poll_group_000", 00:19:19.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:19.721 "listen_address": { 00:19:19.721 "trtype": "TCP", 00:19:19.721 "adrfam": "IPv4", 00:19:19.721 
"traddr": "10.0.0.2", 00:19:19.721 "trsvcid": "4420" 00:19:19.721 }, 00:19:19.721 "peer_address": { 00:19:19.721 "trtype": "TCP", 00:19:19.721 "adrfam": "IPv4", 00:19:19.721 "traddr": "10.0.0.1", 00:19:19.721 "trsvcid": "50142" 00:19:19.721 }, 00:19:19.721 "auth": { 00:19:19.721 "state": "completed", 00:19:19.721 "digest": "sha256", 00:19:19.721 "dhgroup": "null" 00:19:19.721 } 00:19:19.721 } 00:19:19.721 ]' 00:19:19.721 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.721 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.721 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.721 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:19.721 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.721 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.721 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.721 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.979 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:19:19.979 18:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:19:20.912 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.912 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.913 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.913 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.913 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.913 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.913 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:20.913 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:21.170 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:21.170 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.170 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:21.170 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:21.170 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:21.170 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.170 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.170 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.170 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.170 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.170 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.170 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.171 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.428 00:19:21.428 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.428 18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.428 
18:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.686 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.944 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.944 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.944 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.944 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.944 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.944 { 00:19:21.944 "cntlid": 5, 00:19:21.944 "qid": 0, 00:19:21.944 "state": "enabled", 00:19:21.944 "thread": "nvmf_tgt_poll_group_000", 00:19:21.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:21.944 "listen_address": { 00:19:21.944 "trtype": "TCP", 00:19:21.944 "adrfam": "IPv4", 00:19:21.944 "traddr": "10.0.0.2", 00:19:21.944 "trsvcid": "4420" 00:19:21.944 }, 00:19:21.944 "peer_address": { 00:19:21.944 "trtype": "TCP", 00:19:21.944 "adrfam": "IPv4", 00:19:21.944 "traddr": "10.0.0.1", 00:19:21.944 "trsvcid": "42158" 00:19:21.944 }, 00:19:21.944 "auth": { 00:19:21.944 "state": "completed", 00:19:21.944 "digest": "sha256", 00:19:21.944 "dhgroup": "null" 00:19:21.944 } 00:19:21.944 } 00:19:21.944 ]' 00:19:21.944 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.944 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.944 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:19:21.944 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:21.944 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.944 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.944 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.944 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.203 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:19:22.203 18:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:19:23.136 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.136 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.136 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.136 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.136 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.136 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.136 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:23.136 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:23.394 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:23.394 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.394 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:23.394 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:23.394 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:23.394 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.394 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:23.394 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.394 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:23.394 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.394 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:23.394 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.394 18:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.652 00:19:23.652 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.652 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.652 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.910 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.910 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.910 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.910 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.910 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.910 
18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.910 { 00:19:23.910 "cntlid": 7, 00:19:23.910 "qid": 0, 00:19:23.910 "state": "enabled", 00:19:23.910 "thread": "nvmf_tgt_poll_group_000", 00:19:23.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:23.910 "listen_address": { 00:19:23.910 "trtype": "TCP", 00:19:23.910 "adrfam": "IPv4", 00:19:23.910 "traddr": "10.0.0.2", 00:19:23.910 "trsvcid": "4420" 00:19:23.910 }, 00:19:23.910 "peer_address": { 00:19:23.910 "trtype": "TCP", 00:19:23.910 "adrfam": "IPv4", 00:19:23.910 "traddr": "10.0.0.1", 00:19:23.910 "trsvcid": "42186" 00:19:23.910 }, 00:19:23.910 "auth": { 00:19:23.910 "state": "completed", 00:19:23.910 "digest": "sha256", 00:19:23.910 "dhgroup": "null" 00:19:23.910 } 00:19:23.910 } 00:19:23.910 ]' 00:19:23.910 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.168 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.168 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.168 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:24.168 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.168 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.168 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.168 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.426 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:19:24.426 18:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:19:25.359 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.359 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.359 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.359 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.359 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.359 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.359 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.359 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:25.359 18:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:25.617 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:25.617 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.617 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.617 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:25.617 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:25.617 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.617 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.617 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.617 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.617 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.617 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.617 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.617 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.183 00:19:26.183 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.183 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.183 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.183 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.183 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.183 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.183 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.183 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.183 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.183 { 00:19:26.183 "cntlid": 9, 00:19:26.183 "qid": 0, 00:19:26.183 "state": "enabled", 00:19:26.183 "thread": "nvmf_tgt_poll_group_000", 00:19:26.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:26.183 "listen_address": { 00:19:26.183 "trtype": "TCP", 00:19:26.183 "adrfam": "IPv4", 00:19:26.183 "traddr": "10.0.0.2", 00:19:26.183 "trsvcid": "4420" 00:19:26.183 }, 00:19:26.183 "peer_address": { 00:19:26.183 "trtype": "TCP", 00:19:26.183 "adrfam": "IPv4", 00:19:26.183 "traddr": "10.0.0.1", 00:19:26.183 "trsvcid": "42206" 00:19:26.183 
}, 00:19:26.183 "auth": { 00:19:26.183 "state": "completed", 00:19:26.183 "digest": "sha256", 00:19:26.183 "dhgroup": "ffdhe2048" 00:19:26.183 } 00:19:26.183 } 00:19:26.183 ]' 00:19:26.183 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.440 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.440 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.440 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:26.440 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.440 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.440 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.440 18:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.698 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:19:26.698 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:19:27.630 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.630 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.630 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.630 18:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.630 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.630 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.630 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.630 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.888 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:27.888 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.888 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.888 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:27.888 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:27.888 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.888 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.888 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.888 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.888 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.888 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.889 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.889 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.146 00:19:28.146 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.146 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.146 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.404 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.404 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.404 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.404 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.404 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.404 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.404 { 00:19:28.404 "cntlid": 11, 00:19:28.404 "qid": 0, 00:19:28.404 "state": "enabled", 00:19:28.404 "thread": "nvmf_tgt_poll_group_000", 00:19:28.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:28.404 "listen_address": { 00:19:28.404 "trtype": "TCP", 00:19:28.404 "adrfam": "IPv4", 00:19:28.404 "traddr": "10.0.0.2", 00:19:28.404 "trsvcid": "4420" 00:19:28.404 }, 00:19:28.404 "peer_address": { 00:19:28.404 "trtype": "TCP", 00:19:28.404 "adrfam": "IPv4", 00:19:28.404 "traddr": "10.0.0.1", 00:19:28.404 "trsvcid": "42228" 00:19:28.404 }, 00:19:28.404 "auth": { 00:19:28.404 "state": "completed", 00:19:28.404 "digest": "sha256", 00:19:28.404 "dhgroup": "ffdhe2048" 00:19:28.404 } 00:19:28.404 } 00:19:28.404 ]' 00:19:28.404 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.404 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.404 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.662 18:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:28.662 18:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.662 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.662 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.662 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.920 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:19:28.920 18:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:19:29.853 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.853 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.853 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:29.853 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.853 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.853 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.853 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:29.853 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.111 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:30.111 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.111 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.111 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:30.111 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:30.111 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.111 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.111 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.111 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:30.111 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.111 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.111 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.111 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.369 00:19:30.369 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.369 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.369 18:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.627 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.627 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.627 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.627 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.627 18:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.627 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.627 { 00:19:30.627 "cntlid": 13, 00:19:30.627 "qid": 0, 00:19:30.627 "state": "enabled", 00:19:30.627 "thread": "nvmf_tgt_poll_group_000", 00:19:30.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:30.627 "listen_address": { 00:19:30.627 "trtype": "TCP", 00:19:30.627 "adrfam": "IPv4", 00:19:30.627 "traddr": "10.0.0.2", 00:19:30.627 "trsvcid": "4420" 00:19:30.627 }, 00:19:30.627 "peer_address": { 00:19:30.627 "trtype": "TCP", 00:19:30.627 "adrfam": "IPv4", 00:19:30.627 "traddr": "10.0.0.1", 00:19:30.627 "trsvcid": "42270" 00:19:30.627 }, 00:19:30.627 "auth": { 00:19:30.627 "state": "completed", 00:19:30.627 "digest": "sha256", 00:19:30.627 "dhgroup": "ffdhe2048" 00:19:30.627 } 00:19:30.627 } 00:19:30.627 ]' 00:19:30.627 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.627 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.627 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.885 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:30.885 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.885 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.885 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.885 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.143 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:19:31.143 18:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:19:32.077 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.077 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.077 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.077 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.077 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.077 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.077 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:32.077 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:32.335 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:32.335 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.335 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.335 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:32.335 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:32.335 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.335 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:32.335 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.335 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.335 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.335 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:32.335 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.335 18:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.594 00:19:32.594 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.594 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.595 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.853 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.853 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.853 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.853 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.853 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.853 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.853 { 00:19:32.853 "cntlid": 15, 00:19:32.853 "qid": 0, 00:19:32.853 "state": "enabled", 00:19:32.853 "thread": "nvmf_tgt_poll_group_000", 00:19:32.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:32.853 "listen_address": { 00:19:32.853 "trtype": "TCP", 00:19:32.853 "adrfam": "IPv4", 00:19:32.853 "traddr": "10.0.0.2", 00:19:32.853 "trsvcid": "4420" 00:19:32.853 }, 00:19:32.853 "peer_address": { 00:19:32.853 "trtype": "TCP", 00:19:32.853 "adrfam": "IPv4", 00:19:32.853 "traddr": "10.0.0.1", 
00:19:32.853 "trsvcid": "55552" 00:19:32.853 }, 00:19:32.853 "auth": { 00:19:32.853 "state": "completed", 00:19:32.853 "digest": "sha256", 00:19:32.853 "dhgroup": "ffdhe2048" 00:19:32.853 } 00:19:32.853 } 00:19:32.853 ]' 00:19:32.853 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.853 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.853 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.853 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:32.853 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.110 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.110 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.110 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.368 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:19:33.369 18:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:19:34.301 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.301 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.301 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.301 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.301 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.301 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.301 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.301 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:34.301 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:34.558 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:34.558 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.558 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:34.558 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:34.558 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.558 18:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.558 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.558 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.558 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.558 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.558 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.558 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.558 18:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.815 00:19:34.815 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.816 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.816 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.073 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.073 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.073 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.073 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.073 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.073 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.073 { 00:19:35.073 "cntlid": 17, 00:19:35.073 "qid": 0, 00:19:35.073 "state": "enabled", 00:19:35.073 "thread": "nvmf_tgt_poll_group_000", 00:19:35.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:35.073 "listen_address": { 00:19:35.073 "trtype": "TCP", 00:19:35.073 "adrfam": "IPv4", 00:19:35.073 "traddr": "10.0.0.2", 00:19:35.073 "trsvcid": "4420" 00:19:35.073 }, 00:19:35.073 "peer_address": { 00:19:35.073 "trtype": "TCP", 00:19:35.073 "adrfam": "IPv4", 00:19:35.073 "traddr": "10.0.0.1", 00:19:35.073 "trsvcid": "55576" 00:19:35.074 }, 00:19:35.074 "auth": { 00:19:35.074 "state": "completed", 00:19:35.074 "digest": "sha256", 00:19:35.074 "dhgroup": "ffdhe3072" 00:19:35.074 } 00:19:35.074 } 00:19:35.074 ]' 00:19:35.074 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.074 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.074 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.332 18:40:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:35.332 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.332 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.332 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.332 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.590 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:19:35.590 18:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:19:36.591 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.591 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.591 18:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.591 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.591 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.591 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.591 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:36.591 18:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:36.850 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:36.850 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.850 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:36.850 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:36.850 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:36.850 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.850 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.850 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.850 18:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.850 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.850 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.850 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.850 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.109 00:19:37.109 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.109 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.109 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.367 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.367 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.367 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.367 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.367 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.367 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.367 { 00:19:37.367 "cntlid": 19, 00:19:37.367 "qid": 0, 00:19:37.367 "state": "enabled", 00:19:37.367 "thread": "nvmf_tgt_poll_group_000", 00:19:37.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:37.367 "listen_address": { 00:19:37.367 "trtype": "TCP", 00:19:37.367 "adrfam": "IPv4", 00:19:37.367 "traddr": "10.0.0.2", 00:19:37.367 "trsvcid": "4420" 00:19:37.367 }, 00:19:37.367 "peer_address": { 00:19:37.367 "trtype": "TCP", 00:19:37.367 "adrfam": "IPv4", 00:19:37.367 "traddr": "10.0.0.1", 00:19:37.367 "trsvcid": "55588" 00:19:37.367 }, 00:19:37.367 "auth": { 00:19:37.367 "state": "completed", 00:19:37.367 "digest": "sha256", 00:19:37.367 "dhgroup": "ffdhe3072" 00:19:37.367 } 00:19:37.367 } 00:19:37.367 ]' 00:19:37.367 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.367 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.367 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.367 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:37.367 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.367 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.367 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.367 18:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.933 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:19:37.933 18:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:19:38.498 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.498 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.498 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.498 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.756 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.756 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.756 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.756 18:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:39.014 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:39.014 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.014 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.014 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:39.014 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:39.014 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.014 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.014 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.014 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.014 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.014 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.014 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.014 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.272 00:19:39.272 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.272 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.272 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.530 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.530 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.530 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.530 18:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.530 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.530 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.530 { 00:19:39.530 "cntlid": 21, 00:19:39.530 "qid": 0, 00:19:39.530 "state": "enabled", 00:19:39.530 "thread": "nvmf_tgt_poll_group_000", 00:19:39.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:39.530 "listen_address": { 00:19:39.530 "trtype": "TCP", 00:19:39.530 "adrfam": "IPv4", 00:19:39.530 "traddr": "10.0.0.2", 00:19:39.530 
"trsvcid": "4420" 00:19:39.530 }, 00:19:39.530 "peer_address": { 00:19:39.530 "trtype": "TCP", 00:19:39.530 "adrfam": "IPv4", 00:19:39.530 "traddr": "10.0.0.1", 00:19:39.530 "trsvcid": "55610" 00:19:39.530 }, 00:19:39.530 "auth": { 00:19:39.530 "state": "completed", 00:19:39.530 "digest": "sha256", 00:19:39.530 "dhgroup": "ffdhe3072" 00:19:39.530 } 00:19:39.530 } 00:19:39.530 ]' 00:19:39.530 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.530 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.530 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.530 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:39.530 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.788 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.788 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.788 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.046 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:19:40.046 18:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:19:40.979 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.979 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.979 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.979 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.979 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.979 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.979 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:40.979 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.238 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:41.238 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.238 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.238 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:41.238 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:41.238 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.238 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:41.238 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.238 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.238 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.238 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:41.238 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.238 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.496 00:19:41.496 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.496 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.497 18:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.755 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.755 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.755 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.755 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.755 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.755 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.755 { 00:19:41.755 "cntlid": 23, 00:19:41.755 "qid": 0, 00:19:41.755 "state": "enabled", 00:19:41.755 "thread": "nvmf_tgt_poll_group_000", 00:19:41.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:41.755 "listen_address": { 00:19:41.755 "trtype": "TCP", 00:19:41.755 "adrfam": "IPv4", 00:19:41.755 "traddr": "10.0.0.2", 00:19:41.755 "trsvcid": "4420" 00:19:41.755 }, 00:19:41.755 "peer_address": { 00:19:41.755 "trtype": "TCP", 00:19:41.755 "adrfam": "IPv4", 00:19:41.755 "traddr": "10.0.0.1", 00:19:41.755 "trsvcid": "58264" 00:19:41.755 }, 00:19:41.755 "auth": { 00:19:41.755 "state": "completed", 00:19:41.755 "digest": "sha256", 00:19:41.755 "dhgroup": "ffdhe3072" 00:19:41.755 } 00:19:41.755 } 00:19:41.755 ]' 00:19:41.755 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.755 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.755 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.755 18:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.755 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.755 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.755 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.755 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.321 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:19:42.321 18:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.254 18:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.821 00:19:43.821 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.821 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.821 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.079 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.079 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.079 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.079 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.079 18:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.079 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.079 { 00:19:44.079 "cntlid": 25, 00:19:44.079 "qid": 0, 00:19:44.079 "state": "enabled", 00:19:44.079 "thread": "nvmf_tgt_poll_group_000", 00:19:44.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:44.079 "listen_address": { 00:19:44.079 "trtype": "TCP", 00:19:44.079 "adrfam": "IPv4", 00:19:44.079 "traddr": "10.0.0.2", 00:19:44.079 "trsvcid": "4420" 00:19:44.079 }, 00:19:44.079 "peer_address": { 00:19:44.079 "trtype": "TCP", 00:19:44.079 "adrfam": "IPv4", 00:19:44.079 "traddr": "10.0.0.1", 00:19:44.079 "trsvcid": "58296" 00:19:44.079 }, 00:19:44.079 "auth": { 00:19:44.079 "state": "completed", 00:19:44.079 "digest": "sha256", 00:19:44.079 "dhgroup": "ffdhe4096" 00:19:44.079 } 00:19:44.079 } 00:19:44.079 ]' 00:19:44.079 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.079 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.079 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.079 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:44.079 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.079 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.079 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.079 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.337 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:19:44.337 18:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:19:45.270 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.270 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.270 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.270 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.270 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.270 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.270 18:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.270 18:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.528 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:45.528 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.528 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:45.528 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:45.528 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:45.528 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.528 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.528 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.528 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.528 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.528 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.528 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.528 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.093 00:19:46.093 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.093 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.093 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.351 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.351 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.351 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.351 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.351 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.351 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.351 { 00:19:46.351 "cntlid": 27, 00:19:46.351 "qid": 0, 00:19:46.351 "state": "enabled", 00:19:46.351 "thread": "nvmf_tgt_poll_group_000", 00:19:46.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:46.351 "listen_address": { 00:19:46.351 "trtype": "TCP", 00:19:46.351 "adrfam": "IPv4", 00:19:46.351 "traddr": "10.0.0.2", 00:19:46.351 
"trsvcid": "4420" 00:19:46.351 }, 00:19:46.351 "peer_address": { 00:19:46.351 "trtype": "TCP", 00:19:46.351 "adrfam": "IPv4", 00:19:46.351 "traddr": "10.0.0.1", 00:19:46.351 "trsvcid": "58340" 00:19:46.351 }, 00:19:46.351 "auth": { 00:19:46.351 "state": "completed", 00:19:46.351 "digest": "sha256", 00:19:46.351 "dhgroup": "ffdhe4096" 00:19:46.351 } 00:19:46.351 } 00:19:46.351 ]' 00:19:46.351 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.351 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.351 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.351 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:46.351 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.351 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.351 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.351 18:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.609 18:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:19:46.609 18:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:19:47.542 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.542 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.542 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.542 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.542 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.542 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.542 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:47.542 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:47.801 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:47.801 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.801 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.801 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:47.801 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:47.801 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.801 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.801 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.801 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.801 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.801 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.801 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.801 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.366 00:19:48.366 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.366 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:48.366 18:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.623 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.623 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.624 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.624 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.624 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.624 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.624 { 00:19:48.624 "cntlid": 29, 00:19:48.624 "qid": 0, 00:19:48.624 "state": "enabled", 00:19:48.624 "thread": "nvmf_tgt_poll_group_000", 00:19:48.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:48.624 "listen_address": { 00:19:48.624 "trtype": "TCP", 00:19:48.624 "adrfam": "IPv4", 00:19:48.624 "traddr": "10.0.0.2", 00:19:48.624 "trsvcid": "4420" 00:19:48.624 }, 00:19:48.624 "peer_address": { 00:19:48.624 "trtype": "TCP", 00:19:48.624 "adrfam": "IPv4", 00:19:48.624 "traddr": "10.0.0.1", 00:19:48.624 "trsvcid": "58374" 00:19:48.624 }, 00:19:48.624 "auth": { 00:19:48.624 "state": "completed", 00:19:48.624 "digest": "sha256", 00:19:48.624 "dhgroup": "ffdhe4096" 00:19:48.624 } 00:19:48.624 } 00:19:48.624 ]' 00:19:48.624 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.624 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.624 18:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.624 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.624 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.624 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.624 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.624 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.881 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:19:48.881 18:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:19:49.815 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.815 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.815 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.815 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.815 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.815 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.815 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:49.815 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.073 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:50.073 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.073 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.073 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:50.073 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:50.073 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.073 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:50.073 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.073 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.073 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.073 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:50.073 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.073 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.639 00:19:50.639 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.639 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.640 18:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.640 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.897 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.897 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.897 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:50.897 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.897 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.897 { 00:19:50.897 "cntlid": 31, 00:19:50.897 "qid": 0, 00:19:50.897 "state": "enabled", 00:19:50.897 "thread": "nvmf_tgt_poll_group_000", 00:19:50.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:50.897 "listen_address": { 00:19:50.897 "trtype": "TCP", 00:19:50.897 "adrfam": "IPv4", 00:19:50.897 "traddr": "10.0.0.2", 00:19:50.897 "trsvcid": "4420" 00:19:50.897 }, 00:19:50.897 "peer_address": { 00:19:50.897 "trtype": "TCP", 00:19:50.897 "adrfam": "IPv4", 00:19:50.897 "traddr": "10.0.0.1", 00:19:50.897 "trsvcid": "58394" 00:19:50.897 }, 00:19:50.897 "auth": { 00:19:50.897 "state": "completed", 00:19:50.897 "digest": "sha256", 00:19:50.897 "dhgroup": "ffdhe4096" 00:19:50.897 } 00:19:50.897 } 00:19:50.897 ]' 00:19:50.897 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.897 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.897 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.897 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.897 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.897 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.897 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.898 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.155 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:19:51.155 18:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:19:52.087 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.087 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.088 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.088 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.088 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.088 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.088 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.088 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:52.088 18:40:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:52.345 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:52.345 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.345 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.345 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:52.345 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:52.345 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.345 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.345 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.345 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.345 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.345 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.345 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.345 18:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.909 00:19:52.909 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.909 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.909 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.167 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.167 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.167 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.167 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.167 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.167 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.167 { 00:19:53.167 "cntlid": 33, 00:19:53.167 "qid": 0, 00:19:53.167 "state": "enabled", 00:19:53.167 "thread": "nvmf_tgt_poll_group_000", 00:19:53.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:53.167 "listen_address": { 00:19:53.167 "trtype": "TCP", 00:19:53.167 "adrfam": "IPv4", 00:19:53.167 "traddr": "10.0.0.2", 00:19:53.167 
"trsvcid": "4420" 00:19:53.167 }, 00:19:53.167 "peer_address": { 00:19:53.167 "trtype": "TCP", 00:19:53.167 "adrfam": "IPv4", 00:19:53.167 "traddr": "10.0.0.1", 00:19:53.167 "trsvcid": "40024" 00:19:53.167 }, 00:19:53.167 "auth": { 00:19:53.167 "state": "completed", 00:19:53.167 "digest": "sha256", 00:19:53.167 "dhgroup": "ffdhe6144" 00:19:53.167 } 00:19:53.167 } 00:19:53.167 ]' 00:19:53.167 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.167 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.167 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.167 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:53.167 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.167 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.167 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.167 18:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.733 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:19:53.733 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:19:54.298 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.556 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.556 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.556 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.556 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.556 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.556 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.556 18:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.814 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:54.814 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.814 18:40:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.814 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:54.814 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:54.814 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.814 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.814 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.814 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.814 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.814 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.814 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.814 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.380 00:19:55.380 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.380 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.380 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.638 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.638 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.638 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.638 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.638 18:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.638 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.638 { 00:19:55.638 "cntlid": 35, 00:19:55.638 "qid": 0, 00:19:55.638 "state": "enabled", 00:19:55.638 "thread": "nvmf_tgt_poll_group_000", 00:19:55.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:55.638 "listen_address": { 00:19:55.638 "trtype": "TCP", 00:19:55.638 "adrfam": "IPv4", 00:19:55.638 "traddr": "10.0.0.2", 00:19:55.639 "trsvcid": "4420" 00:19:55.639 }, 00:19:55.639 "peer_address": { 00:19:55.639 "trtype": "TCP", 00:19:55.639 "adrfam": "IPv4", 00:19:55.639 "traddr": "10.0.0.1", 00:19:55.639 "trsvcid": "40058" 00:19:55.639 }, 00:19:55.639 "auth": { 00:19:55.639 "state": "completed", 00:19:55.639 "digest": "sha256", 00:19:55.639 "dhgroup": "ffdhe6144" 00:19:55.639 } 00:19:55.639 } 00:19:55.639 ]' 00:19:55.639 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.639 18:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.639 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.639 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:55.639 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.639 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.639 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.639 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.896 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:19:55.896 18:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:19:56.829 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.829 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.829 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.829 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.829 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.829 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.829 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.829 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.087 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:57.087 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.087 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.087 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:57.087 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:57.087 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.087 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:57.087 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.087 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.087 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.087 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.087 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.087 18:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.653 00:19:57.653 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.653 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.653 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.911 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.911 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.911 18:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.911 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.911 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.911 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.911 { 00:19:57.911 "cntlid": 37, 00:19:57.911 "qid": 0, 00:19:57.911 "state": "enabled", 00:19:57.911 "thread": "nvmf_tgt_poll_group_000", 00:19:57.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:57.911 "listen_address": { 00:19:57.911 "trtype": "TCP", 00:19:57.911 "adrfam": "IPv4", 00:19:57.911 "traddr": "10.0.0.2", 00:19:57.911 "trsvcid": "4420" 00:19:57.911 }, 00:19:57.911 "peer_address": { 00:19:57.911 "trtype": "TCP", 00:19:57.911 "adrfam": "IPv4", 00:19:57.911 "traddr": "10.0.0.1", 00:19:57.911 "trsvcid": "40094" 00:19:57.911 }, 00:19:57.911 "auth": { 00:19:57.911 "state": "completed", 00:19:57.911 "digest": "sha256", 00:19:57.911 "dhgroup": "ffdhe6144" 00:19:57.911 } 00:19:57.911 } 00:19:57.911 ]' 00:19:57.911 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.911 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.911 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.169 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.169 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.169 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.169 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.169 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.427 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:19:58.427 18:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:19:59.361 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.361 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.361 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.361 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.361 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.361 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.361 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.361 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.619 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:59.619 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.619 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.619 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:59.619 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:59.619 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.619 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:59.619 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.619 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.619 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.619 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:59.619 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.619 18:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.185 00:20:00.185 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.185 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.185 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.443 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.443 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.443 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.443 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.443 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.443 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.443 { 00:20:00.443 "cntlid": 39, 00:20:00.443 "qid": 0, 00:20:00.443 "state": "enabled", 00:20:00.443 "thread": "nvmf_tgt_poll_group_000", 00:20:00.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:00.443 "listen_address": { 00:20:00.443 "trtype": "TCP", 00:20:00.443 "adrfam": 
"IPv4", 00:20:00.443 "traddr": "10.0.0.2", 00:20:00.443 "trsvcid": "4420" 00:20:00.443 }, 00:20:00.443 "peer_address": { 00:20:00.443 "trtype": "TCP", 00:20:00.443 "adrfam": "IPv4", 00:20:00.443 "traddr": "10.0.0.1", 00:20:00.443 "trsvcid": "40118" 00:20:00.443 }, 00:20:00.443 "auth": { 00:20:00.443 "state": "completed", 00:20:00.443 "digest": "sha256", 00:20:00.443 "dhgroup": "ffdhe6144" 00:20:00.443 } 00:20:00.443 } 00:20:00.443 ]' 00:20:00.443 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.443 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.444 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.444 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:00.444 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.444 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.444 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.444 18:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.701 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:20:00.701 18:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:20:01.636 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.636 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.636 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.636 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.636 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.637 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.637 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.637 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.637 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:01.931 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:01.931 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.931 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.931 
18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:01.931 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:01.931 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.931 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.931 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.931 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.931 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.931 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.931 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.931 18:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.892 00:20:02.892 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.892 18:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.892 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.892 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.892 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.892 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.892 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.892 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.892 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.892 { 00:20:02.892 "cntlid": 41, 00:20:02.892 "qid": 0, 00:20:02.892 "state": "enabled", 00:20:02.892 "thread": "nvmf_tgt_poll_group_000", 00:20:02.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:02.892 "listen_address": { 00:20:02.892 "trtype": "TCP", 00:20:02.892 "adrfam": "IPv4", 00:20:02.892 "traddr": "10.0.0.2", 00:20:02.892 "trsvcid": "4420" 00:20:02.892 }, 00:20:02.892 "peer_address": { 00:20:02.892 "trtype": "TCP", 00:20:02.892 "adrfam": "IPv4", 00:20:02.892 "traddr": "10.0.0.1", 00:20:02.892 "trsvcid": "50546" 00:20:02.892 }, 00:20:02.892 "auth": { 00:20:02.892 "state": "completed", 00:20:02.892 "digest": "sha256", 00:20:02.892 "dhgroup": "ffdhe8192" 00:20:02.892 } 00:20:02.892 } 00:20:02.892 ]' 00:20:02.892 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.150 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:03.150 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.150 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:03.150 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.150 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.150 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.150 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.408 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:20:03.408 18:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:20:04.342 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.342 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.342 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.342 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.342 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.342 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.342 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:04.342 18:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:04.600 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:04.600 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.600 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.600 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:04.600 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:04.600 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.600 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:04.600 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.600 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.600 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.600 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.600 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.600 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.533 00:20:05.533 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.533 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.533 18:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.792 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.792 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.792 18:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.792 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.792 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.792 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.792 { 00:20:05.792 "cntlid": 43, 00:20:05.792 "qid": 0, 00:20:05.792 "state": "enabled", 00:20:05.792 "thread": "nvmf_tgt_poll_group_000", 00:20:05.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:05.792 "listen_address": { 00:20:05.792 "trtype": "TCP", 00:20:05.792 "adrfam": "IPv4", 00:20:05.792 "traddr": "10.0.0.2", 00:20:05.792 "trsvcid": "4420" 00:20:05.792 }, 00:20:05.792 "peer_address": { 00:20:05.792 "trtype": "TCP", 00:20:05.792 "adrfam": "IPv4", 00:20:05.792 "traddr": "10.0.0.1", 00:20:05.792 "trsvcid": "50556" 00:20:05.792 }, 00:20:05.792 "auth": { 00:20:05.792 "state": "completed", 00:20:05.792 "digest": "sha256", 00:20:05.792 "dhgroup": "ffdhe8192" 00:20:05.792 } 00:20:05.792 } 00:20:05.792 ]' 00:20:05.792 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.792 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.792 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.792 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:05.792 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.792 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.792 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.792 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.050 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:20:06.050 18:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:20:06.981 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.981 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.981 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.981 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.981 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.981 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.981 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.981 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.239 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:07.239 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.239 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.239 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:07.239 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:07.239 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.239 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.239 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.239 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.239 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.239 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.239 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.239 18:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.173 00:20:08.173 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.173 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.173 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.432 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.432 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.432 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.432 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.432 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.432 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.432 { 00:20:08.432 "cntlid": 45, 00:20:08.432 "qid": 0, 00:20:08.432 "state": "enabled", 00:20:08.432 "thread": "nvmf_tgt_poll_group_000", 00:20:08.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:08.432 
"listen_address": { 00:20:08.432 "trtype": "TCP", 00:20:08.432 "adrfam": "IPv4", 00:20:08.432 "traddr": "10.0.0.2", 00:20:08.432 "trsvcid": "4420" 00:20:08.432 }, 00:20:08.432 "peer_address": { 00:20:08.432 "trtype": "TCP", 00:20:08.432 "adrfam": "IPv4", 00:20:08.432 "traddr": "10.0.0.1", 00:20:08.432 "trsvcid": "50574" 00:20:08.432 }, 00:20:08.432 "auth": { 00:20:08.432 "state": "completed", 00:20:08.432 "digest": "sha256", 00:20:08.432 "dhgroup": "ffdhe8192" 00:20:08.432 } 00:20:08.432 } 00:20:08.432 ]' 00:20:08.432 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.432 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.432 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.432 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.432 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.432 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.432 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.432 18:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.691 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:20:08.691 18:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:20:09.625 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.625 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.625 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.625 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.625 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.625 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.625 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.625 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.191 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:10.191 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.191 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:10.191 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:10.191 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:10.191 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.191 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:10.191 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.191 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.191 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.191 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:10.191 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.191 18:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.757 00:20:10.757 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.757 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:10.757 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.016 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.016 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.016 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.016 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.016 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.016 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.016 { 00:20:11.016 "cntlid": 47, 00:20:11.016 "qid": 0, 00:20:11.016 "state": "enabled", 00:20:11.016 "thread": "nvmf_tgt_poll_group_000", 00:20:11.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:11.016 "listen_address": { 00:20:11.016 "trtype": "TCP", 00:20:11.016 "adrfam": "IPv4", 00:20:11.016 "traddr": "10.0.0.2", 00:20:11.016 "trsvcid": "4420" 00:20:11.016 }, 00:20:11.016 "peer_address": { 00:20:11.016 "trtype": "TCP", 00:20:11.016 "adrfam": "IPv4", 00:20:11.016 "traddr": "10.0.0.1", 00:20:11.016 "trsvcid": "50598" 00:20:11.016 }, 00:20:11.016 "auth": { 00:20:11.016 "state": "completed", 00:20:11.016 "digest": "sha256", 00:20:11.016 "dhgroup": "ffdhe8192" 00:20:11.016 } 00:20:11.016 } 00:20:11.016 ]' 00:20:11.016 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.274 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.274 18:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.274 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:11.274 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.274 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.274 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.274 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.532 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:20:11.532 18:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:20:12.464 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.464 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.464 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:12.464 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.464 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.464 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:12.464 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.464 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.464 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:12.464 18:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:12.721 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:12.721 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.721 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:12.721 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:12.721 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:12.721 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.721 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.721 
18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.721 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.721 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.721 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.721 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.721 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.979 00:20:12.979 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.979 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.979 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.237 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.237 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.237 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.237 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.237 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.237 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.237 { 00:20:13.237 "cntlid": 49, 00:20:13.237 "qid": 0, 00:20:13.237 "state": "enabled", 00:20:13.237 "thread": "nvmf_tgt_poll_group_000", 00:20:13.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:13.237 "listen_address": { 00:20:13.237 "trtype": "TCP", 00:20:13.237 "adrfam": "IPv4", 00:20:13.237 "traddr": "10.0.0.2", 00:20:13.237 "trsvcid": "4420" 00:20:13.237 }, 00:20:13.237 "peer_address": { 00:20:13.237 "trtype": "TCP", 00:20:13.237 "adrfam": "IPv4", 00:20:13.237 "traddr": "10.0.0.1", 00:20:13.237 "trsvcid": "37982" 00:20:13.237 }, 00:20:13.237 "auth": { 00:20:13.237 "state": "completed", 00:20:13.237 "digest": "sha384", 00:20:13.237 "dhgroup": "null" 00:20:13.237 } 00:20:13.237 } 00:20:13.237 ]' 00:20:13.237 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.237 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.237 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.495 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:13.495 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.495 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.495 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:13.495 18:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.753 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:20:13.753 18:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:20:14.686 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.686 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.686 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.686 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.686 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.686 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.686 18:41:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:14.686 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:14.944 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:20:14.944 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:14.944 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:14.944 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:14.944 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:14.944 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:14.944 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.944 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:14.944 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.944 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:14.944 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.944 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.944 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:15.203
00:20:15.203 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:15.203 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:15.203 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:15.461 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:15.461 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:15.461 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:15.461 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.461 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:15.461 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:15.461 {
00:20:15.461 "cntlid": 51,
00:20:15.461 "qid": 0,
00:20:15.461 "state": "enabled",
00:20:15.461 "thread": "nvmf_tgt_poll_group_000",
00:20:15.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:20:15.461 "listen_address": {
00:20:15.461 "trtype": "TCP",
00:20:15.461 "adrfam": "IPv4",
00:20:15.461 "traddr": "10.0.0.2",
00:20:15.461 "trsvcid": "4420"
00:20:15.461 },
00:20:15.461 "peer_address": {
00:20:15.461 "trtype": "TCP",
00:20:15.461 "adrfam": "IPv4",
00:20:15.461 "traddr": "10.0.0.1",
00:20:15.461 "trsvcid": "38010"
00:20:15.461 },
00:20:15.461 "auth": {
00:20:15.461 "state": "completed",
00:20:15.461 "digest": "sha384",
00:20:15.461 "dhgroup": "null"
00:20:15.461 }
00:20:15.461 }
00:20:15.461 ]'
00:20:15.461 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:15.461 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:15.461 18:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:15.461 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:15.461 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:15.719 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:15.719 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:15.719 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:15.976 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==:
00:20:15.977 18:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==:
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:16.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:16.911 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:17.477
00:20:17.477 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:17.477 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:17.477 18:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:17.736 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:17.736 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:17.736 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:17.736 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.736 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:17.736 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:17.736 {
00:20:17.736 "cntlid": 53,
00:20:17.736 "qid": 0,
00:20:17.736 "state": "enabled",
00:20:17.736 "thread": "nvmf_tgt_poll_group_000",
00:20:17.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:20:17.736 "listen_address": {
00:20:17.736 "trtype": "TCP",
00:20:17.736 "adrfam": "IPv4",
00:20:17.736 "traddr": "10.0.0.2",
00:20:17.736 "trsvcid": "4420"
00:20:17.736 },
00:20:17.736 "peer_address": {
00:20:17.736 "trtype": "TCP",
00:20:17.736 "adrfam": "IPv4",
00:20:17.736 "traddr": "10.0.0.1",
00:20:17.736 "trsvcid": "38022"
00:20:17.736 },
00:20:17.736 "auth": {
00:20:17.736 "state": "completed",
00:20:17.736 "digest": "sha384",
00:20:17.736 "dhgroup": "null"
00:20:17.736 }
00:20:17.736 }
00:20:17.736 ]'
00:20:17.736 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:17.736 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:17.736 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:17.736 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:17.736 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:17.736 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:17.736 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:17.736 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:17.993 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P:
00:20:17.994 18:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P:
00:20:18.927 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:18.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:18.927 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:18.927 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:18.927 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.927 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:18.927 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:18.927 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:18.927 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:19.183 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:20:19.183 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:19.183 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:19.183 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:19.183 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:19.183 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:19.183 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:19.183 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.183 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:19.184 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.184 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:19.184 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:19.184 18:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:19.748
00:20:19.748 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:19.748 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:19.748 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:20.006 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:20.006 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:20.006 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.006 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.006 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.006 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:20.006 {
00:20:20.006 "cntlid": 55,
00:20:20.006 "qid": 0,
00:20:20.006 "state": "enabled",
00:20:20.006 "thread": "nvmf_tgt_poll_group_000",
00:20:20.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:20:20.006 "listen_address": {
00:20:20.006 "trtype": "TCP",
00:20:20.006 "adrfam": "IPv4",
00:20:20.006 "traddr": "10.0.0.2",
00:20:20.006 "trsvcid": "4420"
00:20:20.006 },
00:20:20.006 "peer_address": {
00:20:20.006 "trtype": "TCP",
00:20:20.006 "adrfam": "IPv4",
00:20:20.006 "traddr": "10.0.0.1",
00:20:20.006 "trsvcid": "38050"
00:20:20.006 },
00:20:20.006 "auth": {
00:20:20.006 "state": "completed",
00:20:20.006 "digest": "sha384",
00:20:20.006 "dhgroup": "null"
00:20:20.006 }
00:20:20.006 }
00:20:20.006 ]'
00:20:20.006 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:20.006 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:20.006 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:20.006 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:20.006 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:20.006 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:20.006 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:20.006 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:20.265 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=:
00:20:20.265 18:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=:
00:20:21.198 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:21.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:21.198 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:21.198 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.199 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:21.199 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.199 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:21.199 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:21.199 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:21.199 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:21.457 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:20:21.457 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:21.457 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:21.457 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:21.457 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:21.457 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:21.457 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:21.457 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.457 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:21.457 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.457 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:21.457 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:21.457 18:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:22.024
00:20:22.024 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:22.024 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:22.024 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:22.283 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:22.283 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:22.283 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.283 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:22.283 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.283 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:22.283 {
00:20:22.283 "cntlid": 57,
00:20:22.283 "qid": 0,
00:20:22.283 "state": "enabled",
00:20:22.283 "thread": "nvmf_tgt_poll_group_000",
00:20:22.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:20:22.283 "listen_address": {
00:20:22.283 "trtype": "TCP",
00:20:22.283 "adrfam": "IPv4",
00:20:22.283 "traddr": "10.0.0.2",
00:20:22.283 "trsvcid": "4420"
00:20:22.283 },
00:20:22.283 "peer_address": {
00:20:22.283 "trtype": "TCP",
00:20:22.283 "adrfam": "IPv4",
00:20:22.283 "traddr": "10.0.0.1",
00:20:22.283 "trsvcid": "46426"
00:20:22.283 },
00:20:22.283 "auth": {
00:20:22.283 "state": "completed",
00:20:22.283 "digest": "sha384",
00:20:22.283 "dhgroup": "ffdhe2048"
00:20:22.283 }
00:20:22.283 }
00:20:22.283 ]'
00:20:22.283 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:22.283 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:22.283 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:22.283 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:22.283 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:22.283 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:22.283 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:22.283 18:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:22.542 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=:
00:20:22.542 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=:
00:20:23.484 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:23.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:23.484 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:23.484 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.484 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:23.484 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.484 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:23.484 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:23.484 18:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:23.742 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:20:23.742 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:23.742 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:23.742 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:23.742 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:23.742 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:23.742 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:23.742 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.742 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:23.742 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.742 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:23.742 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:23.742 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:24.307
00:20:24.307 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:24.307 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:24.307 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:24.566 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:24.566 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:24.566 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.566 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:24.566 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.566 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:24.566 {
00:20:24.566 "cntlid": 59,
00:20:24.566 "qid": 0,
00:20:24.566 "state": "enabled",
00:20:24.566 "thread": "nvmf_tgt_poll_group_000",
00:20:24.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:20:24.566 "listen_address": {
00:20:24.566 "trtype": "TCP",
00:20:24.566 "adrfam": "IPv4",
00:20:24.566 "traddr": "10.0.0.2",
00:20:24.566 "trsvcid": "4420"
00:20:24.566 },
00:20:24.566 "peer_address": {
00:20:24.566 "trtype": "TCP",
00:20:24.566 "adrfam": "IPv4",
00:20:24.566 "traddr": "10.0.0.1",
00:20:24.566 "trsvcid": "46444"
00:20:24.566 },
00:20:24.566 "auth": {
00:20:24.566 "state": "completed",
00:20:24.566 "digest": "sha384",
00:20:24.566 "dhgroup": "ffdhe2048"
00:20:24.566 }
00:20:24.566 }
00:20:24.566 ]'
00:20:24.566 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:24.566 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:24.566 18:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:24.566 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:24.566 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:24.566 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:24.566 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:24.566 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:24.824 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==:
00:20:24.824 18:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==:
00:20:25.758 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:25.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:25.758 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:25.758 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.758 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:25.758 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.758 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:25.758 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:25.758 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:26.016 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:20:26.016 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:26.016 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:26.016 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:26.016 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:26.016 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:26.016 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:26.016 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:26.016 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:26.016 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:26.016 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:26.016 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:26.016 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:26.582
00:20:26.582 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:26.582 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:26.582 18:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:26.895 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:26.895 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:26.895 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:26.895 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:26.896 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:26.896 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:26.896 {
00:20:26.896 "cntlid": 61,
00:20:26.896 "qid": 0,
00:20:26.896 "state": "enabled",
00:20:26.896 "thread": "nvmf_tgt_poll_group_000",
00:20:26.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:20:26.896 "listen_address": {
00:20:26.896 "trtype": "TCP",
00:20:26.896 "adrfam": "IPv4",
00:20:26.896 "traddr": "10.0.0.2",
00:20:26.896 "trsvcid": "4420"
00:20:26.896 },
00:20:26.896 "peer_address": {
00:20:26.896 "trtype": "TCP",
00:20:26.896 "adrfam": "IPv4",
00:20:26.896 "traddr": "10.0.0.1",
00:20:26.896 "trsvcid": "46480"
00:20:26.896 },
00:20:26.896 "auth": {
00:20:26.896 "state": "completed",
00:20:26.896 "digest": "sha384",
00:20:26.896 "dhgroup": "ffdhe2048"
00:20:26.896 }
00:20:26.896 }
00:20:26.896 ]'
00:20:26.896 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:26.896 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:26.896 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:26.896 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:26.896 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:26.896 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:26.896 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.896 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.178 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:20:27.178 18:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:20:28.111 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.111 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.111 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.111 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.111 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.111 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.111 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.111 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.369 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:28.369 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.369 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.369 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:28.369 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:28.369 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.369 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:28.369 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.369 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.369 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.369 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:28.369 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.369 18:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.627 00:20:28.627 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.627 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.627 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.886 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.886 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.886 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.886 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.886 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.886 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.886 { 00:20:28.886 "cntlid": 63, 00:20:28.886 "qid": 0, 00:20:28.886 "state": "enabled", 00:20:28.886 "thread": "nvmf_tgt_poll_group_000", 00:20:28.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:28.886 "listen_address": { 00:20:28.886 "trtype": "TCP", 00:20:28.886 "adrfam": 
"IPv4", 00:20:28.886 "traddr": "10.0.0.2", 00:20:28.886 "trsvcid": "4420" 00:20:28.886 }, 00:20:28.886 "peer_address": { 00:20:28.886 "trtype": "TCP", 00:20:28.886 "adrfam": "IPv4", 00:20:28.886 "traddr": "10.0.0.1", 00:20:28.886 "trsvcid": "46504" 00:20:28.886 }, 00:20:28.886 "auth": { 00:20:28.886 "state": "completed", 00:20:28.886 "digest": "sha384", 00:20:28.886 "dhgroup": "ffdhe2048" 00:20:28.886 } 00:20:28.886 } 00:20:28.886 ]' 00:20:28.886 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.886 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.886 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.886 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:28.886 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.144 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.144 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.144 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.401 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:20:29.401 18:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:20:30.332 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.332 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.332 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.332 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.332 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.332 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.332 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.332 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.332 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.590 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:30.590 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.590 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.590 
18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:30.590 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:30.590 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.590 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.590 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.590 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.590 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.590 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.590 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.590 18:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.847 00:20:30.847 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.847 18:41:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.847 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.105 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.105 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.105 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.105 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.105 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.105 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.105 { 00:20:31.105 "cntlid": 65, 00:20:31.105 "qid": 0, 00:20:31.105 "state": "enabled", 00:20:31.105 "thread": "nvmf_tgt_poll_group_000", 00:20:31.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:31.105 "listen_address": { 00:20:31.105 "trtype": "TCP", 00:20:31.105 "adrfam": "IPv4", 00:20:31.105 "traddr": "10.0.0.2", 00:20:31.105 "trsvcid": "4420" 00:20:31.105 }, 00:20:31.105 "peer_address": { 00:20:31.105 "trtype": "TCP", 00:20:31.105 "adrfam": "IPv4", 00:20:31.105 "traddr": "10.0.0.1", 00:20:31.105 "trsvcid": "46530" 00:20:31.105 }, 00:20:31.105 "auth": { 00:20:31.105 "state": "completed", 00:20:31.105 "digest": "sha384", 00:20:31.105 "dhgroup": "ffdhe3072" 00:20:31.105 } 00:20:31.105 } 00:20:31.105 ]' 00:20:31.105 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.105 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:31.105 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.105 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:31.105 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.363 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.363 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.363 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.620 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:20:31.620 18:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:20:32.553 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.553 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.553 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.553 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.553 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.553 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.554 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.554 18:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.810 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:32.811 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.811 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.811 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:32.811 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:32.811 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.811 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:32.811 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.811 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.811 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.811 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.811 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.811 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.067 00:20:33.067 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.067 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.068 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.325 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.325 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.325 18:41:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.325 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.326 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.326 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.326 { 00:20:33.326 "cntlid": 67, 00:20:33.326 "qid": 0, 00:20:33.326 "state": "enabled", 00:20:33.326 "thread": "nvmf_tgt_poll_group_000", 00:20:33.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:33.326 "listen_address": { 00:20:33.326 "trtype": "TCP", 00:20:33.326 "adrfam": "IPv4", 00:20:33.326 "traddr": "10.0.0.2", 00:20:33.326 "trsvcid": "4420" 00:20:33.326 }, 00:20:33.326 "peer_address": { 00:20:33.326 "trtype": "TCP", 00:20:33.326 "adrfam": "IPv4", 00:20:33.326 "traddr": "10.0.0.1", 00:20:33.326 "trsvcid": "60546" 00:20:33.326 }, 00:20:33.326 "auth": { 00:20:33.326 "state": "completed", 00:20:33.326 "digest": "sha384", 00:20:33.326 "dhgroup": "ffdhe3072" 00:20:33.326 } 00:20:33.326 } 00:20:33.326 ]' 00:20:33.326 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.326 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.326 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.326 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:33.326 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.583 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.583 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.583 18:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.841 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:20:33.841 18:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:20:34.775 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.775 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.775 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.775 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.775 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.775 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.775 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.775 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.775 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:34.775 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.775 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.775 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:35.033 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:35.033 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.033 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.033 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.033 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.033 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.033 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.033 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.033 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.291 00:20:35.291 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.291 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.291 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.550 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.550 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.550 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.550 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.550 18:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.550 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.550 { 00:20:35.550 "cntlid": 69, 00:20:35.550 "qid": 0, 00:20:35.550 "state": "enabled", 00:20:35.550 "thread": "nvmf_tgt_poll_group_000", 00:20:35.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:35.550 
"listen_address": { 00:20:35.550 "trtype": "TCP", 00:20:35.550 "adrfam": "IPv4", 00:20:35.550 "traddr": "10.0.0.2", 00:20:35.550 "trsvcid": "4420" 00:20:35.550 }, 00:20:35.550 "peer_address": { 00:20:35.550 "trtype": "TCP", 00:20:35.550 "adrfam": "IPv4", 00:20:35.550 "traddr": "10.0.0.1", 00:20:35.550 "trsvcid": "60568" 00:20:35.550 }, 00:20:35.550 "auth": { 00:20:35.550 "state": "completed", 00:20:35.550 "digest": "sha384", 00:20:35.550 "dhgroup": "ffdhe3072" 00:20:35.550 } 00:20:35.550 } 00:20:35.550 ]' 00:20:35.550 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.550 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.550 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.550 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:35.550 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.550 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.550 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.550 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.116 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:20:36.116 18:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.050 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.616 00:20:37.616 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.616 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:37.616 18:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.874 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.874 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.874 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.874 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.874 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.874 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.874 { 00:20:37.874 "cntlid": 71, 00:20:37.874 "qid": 0, 00:20:37.874 "state": "enabled", 00:20:37.874 "thread": "nvmf_tgt_poll_group_000", 00:20:37.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.874 "listen_address": { 00:20:37.874 "trtype": "TCP", 00:20:37.874 "adrfam": "IPv4", 00:20:37.874 "traddr": "10.0.0.2", 00:20:37.874 "trsvcid": "4420" 00:20:37.874 }, 00:20:37.874 "peer_address": { 00:20:37.874 "trtype": "TCP", 00:20:37.874 "adrfam": "IPv4", 00:20:37.874 "traddr": "10.0.0.1", 00:20:37.874 "trsvcid": "60610" 00:20:37.874 }, 00:20:37.874 "auth": { 00:20:37.874 "state": "completed", 00:20:37.874 "digest": "sha384", 00:20:37.874 "dhgroup": "ffdhe3072" 00:20:37.874 } 00:20:37.874 } 00:20:37.874 ]' 00:20:37.874 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.874 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.874 18:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.874 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.874 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.874 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.874 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.874 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.132 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:20:38.132 18:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:20:39.066 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.066 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.066 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:39.066 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.066 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.066 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.066 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.066 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.066 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.324 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:39.324 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.324 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.324 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:39.324 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:39.324 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.324 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.324 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:39.324 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.324 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.324 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.324 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.324 18:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.890 00:20:39.890 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.890 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.890 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.149 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.149 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.149 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.149 18:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.149 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.149 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.149 { 00:20:40.149 "cntlid": 73, 00:20:40.149 "qid": 0, 00:20:40.149 "state": "enabled", 00:20:40.149 "thread": "nvmf_tgt_poll_group_000", 00:20:40.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:40.149 "listen_address": { 00:20:40.149 "trtype": "TCP", 00:20:40.149 "adrfam": "IPv4", 00:20:40.149 "traddr": "10.0.0.2", 00:20:40.149 "trsvcid": "4420" 00:20:40.149 }, 00:20:40.149 "peer_address": { 00:20:40.149 "trtype": "TCP", 00:20:40.149 "adrfam": "IPv4", 00:20:40.149 "traddr": "10.0.0.1", 00:20:40.149 "trsvcid": "60640" 00:20:40.149 }, 00:20:40.149 "auth": { 00:20:40.149 "state": "completed", 00:20:40.149 "digest": "sha384", 00:20:40.149 "dhgroup": "ffdhe4096" 00:20:40.149 } 00:20:40.149 } 00:20:40.149 ]' 00:20:40.149 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.149 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.149 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.149 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.149 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.149 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.149 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.149 18:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.407 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:20:40.407 18:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:20:41.341 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.341 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.341 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.341 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.341 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.341 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.341 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.341 18:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.599 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:41.599 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.599 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.599 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:41.599 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:41.599 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.599 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.599 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.599 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.599 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.599 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.599 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.599 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.164 00:20:42.164 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.164 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.164 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.422 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.422 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.422 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.422 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.422 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.422 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.422 { 00:20:42.422 "cntlid": 75, 00:20:42.422 "qid": 0, 00:20:42.422 "state": "enabled", 00:20:42.422 "thread": "nvmf_tgt_poll_group_000", 00:20:42.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:42.422 
"listen_address": { 00:20:42.422 "trtype": "TCP", 00:20:42.422 "adrfam": "IPv4", 00:20:42.422 "traddr": "10.0.0.2", 00:20:42.422 "trsvcid": "4420" 00:20:42.422 }, 00:20:42.422 "peer_address": { 00:20:42.423 "trtype": "TCP", 00:20:42.423 "adrfam": "IPv4", 00:20:42.423 "traddr": "10.0.0.1", 00:20:42.423 "trsvcid": "49460" 00:20:42.423 }, 00:20:42.423 "auth": { 00:20:42.423 "state": "completed", 00:20:42.423 "digest": "sha384", 00:20:42.423 "dhgroup": "ffdhe4096" 00:20:42.423 } 00:20:42.423 } 00:20:42.423 ]' 00:20:42.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:42.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.423 18:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.680 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:20:42.681 18:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:20:43.614 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.614 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.614 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.614 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.614 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.614 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.614 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:43.614 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:43.872 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:43.872 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.872 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:43.872 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:43.872 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:43.872 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.872 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.872 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.872 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.130 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.130 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.130 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.130 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.388 00:20:44.388 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:44.388 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.388 18:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.646 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.647 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.647 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.647 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.647 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.647 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.647 { 00:20:44.647 "cntlid": 77, 00:20:44.647 "qid": 0, 00:20:44.647 "state": "enabled", 00:20:44.647 "thread": "nvmf_tgt_poll_group_000", 00:20:44.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:44.647 "listen_address": { 00:20:44.647 "trtype": "TCP", 00:20:44.647 "adrfam": "IPv4", 00:20:44.647 "traddr": "10.0.0.2", 00:20:44.647 "trsvcid": "4420" 00:20:44.647 }, 00:20:44.647 "peer_address": { 00:20:44.647 "trtype": "TCP", 00:20:44.647 "adrfam": "IPv4", 00:20:44.647 "traddr": "10.0.0.1", 00:20:44.647 "trsvcid": "49496" 00:20:44.647 }, 00:20:44.647 "auth": { 00:20:44.647 "state": "completed", 00:20:44.647 "digest": "sha384", 00:20:44.647 "dhgroup": "ffdhe4096" 00:20:44.647 } 00:20:44.647 } 00:20:44.647 ]' 00:20:44.647 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.647 18:41:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.647 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.647 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:44.647 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.905 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.905 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.905 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.163 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:20:45.163 18:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:20:46.096 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.096 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.096 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.096 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.096 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:46.097 18:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.097 18:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.663 00:20:46.663 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.663 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.663 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.921 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.921 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.921 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.921 18:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.921 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.921 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.921 { 00:20:46.921 "cntlid": 79, 00:20:46.921 "qid": 0, 00:20:46.921 "state": "enabled", 00:20:46.921 "thread": "nvmf_tgt_poll_group_000", 00:20:46.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:46.921 "listen_address": { 00:20:46.921 "trtype": "TCP", 00:20:46.921 "adrfam": "IPv4", 00:20:46.921 "traddr": "10.0.0.2", 00:20:46.921 "trsvcid": "4420" 00:20:46.921 }, 00:20:46.921 "peer_address": { 00:20:46.921 "trtype": "TCP", 00:20:46.921 "adrfam": "IPv4", 00:20:46.921 "traddr": "10.0.0.1", 00:20:46.921 "trsvcid": "49526" 00:20:46.921 }, 00:20:46.921 "auth": { 00:20:46.921 "state": "completed", 00:20:46.921 "digest": "sha384", 00:20:46.921 "dhgroup": "ffdhe4096" 00:20:46.921 } 00:20:46.921 } 00:20:46.921 ]' 00:20:46.921 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.921 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.921 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.921 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:46.921 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.921 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.921 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.921 18:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.179 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:20:47.179 18:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:20:48.113 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.113 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.113 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.113 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.113 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.113 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.113 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.113 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:48.113 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.371 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:48.371 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.371 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.371 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:48.371 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:48.371 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.371 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.371 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.371 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.371 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.371 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.371 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.371 18:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.936 00:20:48.936 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.936 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.936 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.193 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.193 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.193 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.193 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.194 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.194 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.194 { 00:20:49.194 "cntlid": 81, 00:20:49.194 "qid": 0, 00:20:49.194 "state": "enabled", 00:20:49.194 "thread": "nvmf_tgt_poll_group_000", 00:20:49.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:49.194 "listen_address": { 
00:20:49.194 "trtype": "TCP", 00:20:49.194 "adrfam": "IPv4", 00:20:49.194 "traddr": "10.0.0.2", 00:20:49.194 "trsvcid": "4420" 00:20:49.194 }, 00:20:49.194 "peer_address": { 00:20:49.194 "trtype": "TCP", 00:20:49.194 "adrfam": "IPv4", 00:20:49.194 "traddr": "10.0.0.1", 00:20:49.194 "trsvcid": "49570" 00:20:49.194 }, 00:20:49.194 "auth": { 00:20:49.194 "state": "completed", 00:20:49.194 "digest": "sha384", 00:20:49.194 "dhgroup": "ffdhe6144" 00:20:49.194 } 00:20:49.194 } 00:20:49.194 ]' 00:20:49.194 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.452 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.452 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.452 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.452 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.452 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.452 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.452 18:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.709 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:20:49.710 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:20:50.642 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.642 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.642 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.642 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.642 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.642 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.642 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.642 18:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.899 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:50.899 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:50.899 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.899 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:50.899 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:50.899 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.899 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.899 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.899 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.899 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.899 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.899 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.900 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.464 00:20:51.464 18:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.464 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.464 18:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.722 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.722 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.722 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.722 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.722 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.722 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.722 { 00:20:51.722 "cntlid": 83, 00:20:51.722 "qid": 0, 00:20:51.722 "state": "enabled", 00:20:51.722 "thread": "nvmf_tgt_poll_group_000", 00:20:51.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:51.722 "listen_address": { 00:20:51.722 "trtype": "TCP", 00:20:51.722 "adrfam": "IPv4", 00:20:51.722 "traddr": "10.0.0.2", 00:20:51.722 "trsvcid": "4420" 00:20:51.722 }, 00:20:51.722 "peer_address": { 00:20:51.722 "trtype": "TCP", 00:20:51.722 "adrfam": "IPv4", 00:20:51.722 "traddr": "10.0.0.1", 00:20:51.722 "trsvcid": "59392" 00:20:51.722 }, 00:20:51.722 "auth": { 00:20:51.722 "state": "completed", 00:20:51.722 "digest": "sha384", 00:20:51.722 "dhgroup": "ffdhe6144" 00:20:51.722 } 00:20:51.722 } 00:20:51.722 ]' 00:20:51.722 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:51.722 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.722 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.722 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:51.722 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.722 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.722 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.722 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.014 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:20:52.014 18:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:20:52.975 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.975 18:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.975 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.975 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.975 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.975 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.975 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:52.975 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.233 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:53.233 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.233 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.233 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:53.233 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:53.233 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.233 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.233 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.233 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.233 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.233 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.233 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.233 18:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.799 00:20:53.799 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.800 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.800 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.058 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.058 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.058 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.058 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.058 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.058 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.058 { 00:20:54.058 "cntlid": 85, 00:20:54.058 "qid": 0, 00:20:54.058 "state": "enabled", 00:20:54.058 "thread": "nvmf_tgt_poll_group_000", 00:20:54.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:54.058 "listen_address": { 00:20:54.058 "trtype": "TCP", 00:20:54.058 "adrfam": "IPv4", 00:20:54.058 "traddr": "10.0.0.2", 00:20:54.058 "trsvcid": "4420" 00:20:54.058 }, 00:20:54.058 "peer_address": { 00:20:54.058 "trtype": "TCP", 00:20:54.058 "adrfam": "IPv4", 00:20:54.058 "traddr": "10.0.0.1", 00:20:54.058 "trsvcid": "59412" 00:20:54.058 }, 00:20:54.058 "auth": { 00:20:54.058 "state": "completed", 00:20:54.058 "digest": "sha384", 00:20:54.058 "dhgroup": "ffdhe6144" 00:20:54.058 } 00:20:54.058 } 00:20:54.058 ]' 00:20:54.058 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.058 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.058 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.058 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.058 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.058 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:54.058 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.058 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.625 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:20:54.625 18:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:20:55.189 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.189 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.189 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.189 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.189 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.189 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:55.189 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:55.189 18:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:55.447 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:55.447 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.447 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.447 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:55.447 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:55.447 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.447 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:55.447 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.447 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.704 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.704 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:55.704 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:55.704 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:56.269 00:20:56.270 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.270 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.270 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.527 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.527 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.527 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.527 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.527 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.527 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.527 { 00:20:56.527 "cntlid": 87, 00:20:56.527 "qid": 0, 00:20:56.527 "state": "enabled", 00:20:56.527 "thread": "nvmf_tgt_poll_group_000", 00:20:56.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:56.527 "listen_address": { 00:20:56.527 "trtype": 
"TCP", 00:20:56.527 "adrfam": "IPv4", 00:20:56.527 "traddr": "10.0.0.2", 00:20:56.527 "trsvcid": "4420" 00:20:56.527 }, 00:20:56.527 "peer_address": { 00:20:56.527 "trtype": "TCP", 00:20:56.527 "adrfam": "IPv4", 00:20:56.527 "traddr": "10.0.0.1", 00:20:56.527 "trsvcid": "59432" 00:20:56.527 }, 00:20:56.527 "auth": { 00:20:56.527 "state": "completed", 00:20:56.527 "digest": "sha384", 00:20:56.527 "dhgroup": "ffdhe6144" 00:20:56.527 } 00:20:56.527 } 00:20:56.527 ]' 00:20:56.527 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.527 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.527 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.527 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:56.527 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.527 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.527 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.528 18:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.786 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:20:56.786 18:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:20:57.719 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.719 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.719 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.719 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.719 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.719 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.719 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.719 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.719 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.978 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:57.978 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.978 18:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.978 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:57.978 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:57.978 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.978 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.978 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.978 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.978 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.978 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.978 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.978 18:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.910 00:20:58.910 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.910 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.910 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.168 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.168 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.168 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.168 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.168 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.168 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.168 { 00:20:59.168 "cntlid": 89, 00:20:59.168 "qid": 0, 00:20:59.168 "state": "enabled", 00:20:59.168 "thread": "nvmf_tgt_poll_group_000", 00:20:59.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:59.168 "listen_address": { 00:20:59.168 "trtype": "TCP", 00:20:59.168 "adrfam": "IPv4", 00:20:59.168 "traddr": "10.0.0.2", 00:20:59.168 "trsvcid": "4420" 00:20:59.168 }, 00:20:59.168 "peer_address": { 00:20:59.168 "trtype": "TCP", 00:20:59.168 "adrfam": "IPv4", 00:20:59.168 "traddr": "10.0.0.1", 00:20:59.168 "trsvcid": "59456" 00:20:59.168 }, 00:20:59.168 "auth": { 00:20:59.168 "state": "completed", 00:20:59.168 "digest": "sha384", 00:20:59.168 "dhgroup": "ffdhe8192" 00:20:59.168 } 00:20:59.168 } 00:20:59.168 ]' 00:20:59.168 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.168 18:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.168 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.425 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.425 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.425 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.425 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.425 18:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.682 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:20:59.682 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:21:00.614 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:00.614 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.614 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.614 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.614 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.614 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.614 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.614 18:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.871 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:00.871 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.871 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.871 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:00.871 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:00.871 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.871 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.871 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.871 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.871 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.871 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.871 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.871 18:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.803 00:21:01.803 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.803 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.803 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.803 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.803 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.803 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.803 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.803 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.803 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.803 { 00:21:01.803 "cntlid": 91, 00:21:01.803 "qid": 0, 00:21:01.803 "state": "enabled", 00:21:01.803 "thread": "nvmf_tgt_poll_group_000", 00:21:01.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.803 "listen_address": { 00:21:01.803 "trtype": "TCP", 00:21:01.803 "adrfam": "IPv4", 00:21:01.803 "traddr": "10.0.0.2", 00:21:01.803 "trsvcid": "4420" 00:21:01.803 }, 00:21:01.803 "peer_address": { 00:21:01.803 "trtype": "TCP", 00:21:01.803 "adrfam": "IPv4", 00:21:01.803 "traddr": "10.0.0.1", 00:21:01.803 "trsvcid": "45554" 00:21:01.803 }, 00:21:01.803 "auth": { 00:21:01.803 "state": "completed", 00:21:01.803 "digest": "sha384", 00:21:01.803 "dhgroup": "ffdhe8192" 00:21:01.803 } 00:21:01.803 } 00:21:01.803 ]' 00:21:01.803 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.061 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.061 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.061 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:02.061 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.061 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:02.061 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.061 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.319 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:21:02.319 18:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:21:03.252 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.252 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.252 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.252 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.252 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.252 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:03.252 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.252 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.511 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:03.511 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.511 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.511 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:03.511 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:03.511 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.511 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.511 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.511 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.511 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.511 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.511 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.511 18:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.444 00:21:04.444 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.444 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.444 18:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.444 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.444 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.444 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.444 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.444 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.703 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.703 { 00:21:04.703 "cntlid": 93, 00:21:04.703 "qid": 0, 00:21:04.703 "state": "enabled", 00:21:04.703 "thread": "nvmf_tgt_poll_group_000", 00:21:04.703 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:04.703 "listen_address": { 00:21:04.703 "trtype": "TCP", 00:21:04.703 "adrfam": "IPv4", 00:21:04.703 "traddr": "10.0.0.2", 00:21:04.703 "trsvcid": "4420" 00:21:04.703 }, 00:21:04.703 "peer_address": { 00:21:04.703 "trtype": "TCP", 00:21:04.703 "adrfam": "IPv4", 00:21:04.703 "traddr": "10.0.0.1", 00:21:04.703 "trsvcid": "45574" 00:21:04.703 }, 00:21:04.703 "auth": { 00:21:04.703 "state": "completed", 00:21:04.703 "digest": "sha384", 00:21:04.703 "dhgroup": "ffdhe8192" 00:21:04.703 } 00:21:04.703 } 00:21:04.703 ]' 00:21:04.703 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.703 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.703 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.703 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.703 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.703 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.703 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.703 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.961 18:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:21:04.961 18:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:21:05.893 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.893 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.893 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.893 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.893 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.893 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.893 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:05.893 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.152 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:06.152 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:06.152 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.152 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:06.152 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.152 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.152 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:06.152 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.152 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.152 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.152 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:06.152 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.152 18:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.085 00:21:07.085 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:07.085 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.085 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.343 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.343 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.343 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.343 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.343 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.343 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.343 { 00:21:07.343 "cntlid": 95, 00:21:07.343 "qid": 0, 00:21:07.343 "state": "enabled", 00:21:07.343 "thread": "nvmf_tgt_poll_group_000", 00:21:07.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:07.343 "listen_address": { 00:21:07.343 "trtype": "TCP", 00:21:07.343 "adrfam": "IPv4", 00:21:07.343 "traddr": "10.0.0.2", 00:21:07.343 "trsvcid": "4420" 00:21:07.343 }, 00:21:07.343 "peer_address": { 00:21:07.343 "trtype": "TCP", 00:21:07.343 "adrfam": "IPv4", 00:21:07.343 "traddr": "10.0.0.1", 00:21:07.343 "trsvcid": "45602" 00:21:07.343 }, 00:21:07.343 "auth": { 00:21:07.343 "state": "completed", 00:21:07.343 "digest": "sha384", 00:21:07.343 "dhgroup": "ffdhe8192" 00:21:07.343 } 00:21:07.343 } 00:21:07.343 ]' 00:21:07.343 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.344 18:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.344 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.344 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.344 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.601 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.601 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.601 18:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.859 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:21:07.859 18:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:21:08.792 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.793 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.793 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.793 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.793 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.793 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:08.793 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.793 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.793 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.793 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.050 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:09.050 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.050 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.050 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:09.050 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:09.050 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.050 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.050 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.050 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.050 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.050 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.050 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.050 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.308 00:21:09.308 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.308 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.308 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.566 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.566 18:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.566 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.566 18:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.566 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.566 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.567 { 00:21:09.567 "cntlid": 97, 00:21:09.567 "qid": 0, 00:21:09.567 "state": "enabled", 00:21:09.567 "thread": "nvmf_tgt_poll_group_000", 00:21:09.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.567 "listen_address": { 00:21:09.567 "trtype": "TCP", 00:21:09.567 "adrfam": "IPv4", 00:21:09.567 "traddr": "10.0.0.2", 00:21:09.567 "trsvcid": "4420" 00:21:09.567 }, 00:21:09.567 "peer_address": { 00:21:09.567 "trtype": "TCP", 00:21:09.567 "adrfam": "IPv4", 00:21:09.567 "traddr": "10.0.0.1", 00:21:09.567 "trsvcid": "45630" 00:21:09.567 }, 00:21:09.567 "auth": { 00:21:09.567 "state": "completed", 00:21:09.567 "digest": "sha512", 00:21:09.567 "dhgroup": "null" 00:21:09.567 } 00:21:09.567 } 00:21:09.567 ]' 00:21:09.567 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.567 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.567 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.567 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:09.567 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.567 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.567 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.567 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.132 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:21:10.133 18:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.065 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.631 00:21:11.631 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.631 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.631 18:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.888 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.888 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.888 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.888 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.888 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.888 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.888 { 00:21:11.888 "cntlid": 99, 
00:21:11.888 "qid": 0, 00:21:11.888 "state": "enabled", 00:21:11.888 "thread": "nvmf_tgt_poll_group_000", 00:21:11.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:11.888 "listen_address": { 00:21:11.889 "trtype": "TCP", 00:21:11.889 "adrfam": "IPv4", 00:21:11.889 "traddr": "10.0.0.2", 00:21:11.889 "trsvcid": "4420" 00:21:11.889 }, 00:21:11.889 "peer_address": { 00:21:11.889 "trtype": "TCP", 00:21:11.889 "adrfam": "IPv4", 00:21:11.889 "traddr": "10.0.0.1", 00:21:11.889 "trsvcid": "41358" 00:21:11.889 }, 00:21:11.889 "auth": { 00:21:11.889 "state": "completed", 00:21:11.889 "digest": "sha512", 00:21:11.889 "dhgroup": "null" 00:21:11.889 } 00:21:11.889 } 00:21:11.889 ]' 00:21:11.889 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.889 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.889 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.889 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:11.889 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.889 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.889 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.889 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.146 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret 
DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:21:12.146 18:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:21:13.083 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.083 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.083 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.083 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.083 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.083 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.083 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:13.083 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:13.341 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:21:13.341 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.341 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.341 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:13.341 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:13.341 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.341 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.341 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.341 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.341 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.341 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.341 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.341 18:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.907 00:21:13.907 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.907 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.907 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.165 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.166 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.166 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.166 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.166 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.166 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.166 { 00:21:14.166 "cntlid": 101, 00:21:14.166 "qid": 0, 00:21:14.166 "state": "enabled", 00:21:14.166 "thread": "nvmf_tgt_poll_group_000", 00:21:14.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:14.166 "listen_address": { 00:21:14.166 "trtype": "TCP", 00:21:14.166 "adrfam": "IPv4", 00:21:14.166 "traddr": "10.0.0.2", 00:21:14.166 "trsvcid": "4420" 00:21:14.166 }, 00:21:14.166 "peer_address": { 00:21:14.166 "trtype": "TCP", 00:21:14.166 "adrfam": "IPv4", 00:21:14.166 "traddr": "10.0.0.1", 00:21:14.166 "trsvcid": "41382" 00:21:14.166 }, 00:21:14.166 "auth": { 00:21:14.166 "state": "completed", 00:21:14.166 "digest": "sha512", 00:21:14.166 "dhgroup": "null" 00:21:14.166 } 00:21:14.166 } 
00:21:14.166 ]' 00:21:14.166 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.166 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.166 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.166 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:14.166 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.166 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.166 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.166 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.423 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:21:14.424 18:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:21:15.355 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.355 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.355 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.355 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.355 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.355 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.355 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.355 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:15.355 18:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:15.613 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:15.613 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.613 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.613 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:15.613 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:15.613 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.613 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:15.613 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.613 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.613 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.613 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:15.613 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.613 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.872 00:21:15.872 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.872 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.872 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.439 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.439 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:16.439 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.439 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.439 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.439 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.439 { 00:21:16.439 "cntlid": 103, 00:21:16.439 "qid": 0, 00:21:16.439 "state": "enabled", 00:21:16.439 "thread": "nvmf_tgt_poll_group_000", 00:21:16.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:16.439 "listen_address": { 00:21:16.439 "trtype": "TCP", 00:21:16.439 "adrfam": "IPv4", 00:21:16.439 "traddr": "10.0.0.2", 00:21:16.439 "trsvcid": "4420" 00:21:16.439 }, 00:21:16.439 "peer_address": { 00:21:16.439 "trtype": "TCP", 00:21:16.439 "adrfam": "IPv4", 00:21:16.439 "traddr": "10.0.0.1", 00:21:16.439 "trsvcid": "41406" 00:21:16.439 }, 00:21:16.439 "auth": { 00:21:16.439 "state": "completed", 00:21:16.439 "digest": "sha512", 00:21:16.439 "dhgroup": "null" 00:21:16.439 } 00:21:16.439 } 00:21:16.439 ]' 00:21:16.439 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.439 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.439 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.439 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:16.439 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.439 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.439 18:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.439 18:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.697 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:21:16.697 18:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:21:17.713 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.713 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.713 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.713 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.713 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.713 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.713 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.713 18:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.713 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.971 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:17.971 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.971 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.971 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:17.971 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:17.971 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.971 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.971 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.971 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.971 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.971 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.971 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.971 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.230 00:21:18.230 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.230 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.230 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.488 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.488 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.488 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.488 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.488 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.488 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.488 { 00:21:18.488 "cntlid": 105, 00:21:18.488 "qid": 0, 00:21:18.488 "state": "enabled", 00:21:18.488 "thread": "nvmf_tgt_poll_group_000", 00:21:18.488 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:18.488 "listen_address": { 00:21:18.488 "trtype": "TCP", 00:21:18.488 "adrfam": "IPv4", 00:21:18.488 "traddr": "10.0.0.2", 00:21:18.488 "trsvcid": "4420" 00:21:18.488 }, 00:21:18.488 "peer_address": { 00:21:18.488 "trtype": "TCP", 00:21:18.488 "adrfam": "IPv4", 00:21:18.488 "traddr": "10.0.0.1", 00:21:18.488 "trsvcid": "41438" 00:21:18.488 }, 00:21:18.488 "auth": { 00:21:18.488 "state": "completed", 00:21:18.488 "digest": "sha512", 00:21:18.488 "dhgroup": "ffdhe2048" 00:21:18.488 } 00:21:18.488 } 00:21:18.488 ]' 00:21:18.488 18:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.488 18:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.488 18:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.488 18:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.488 18:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.747 18:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.747 18:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.747 18:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.005 18:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=:
00:21:19.005 18:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=:
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:19.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.939 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:20.201 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.201 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:20.201 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:20.201 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:20.460
00:21:20.460 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:20.460 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:20.460 18:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:20.719 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:20.719 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:20.719 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.719 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:20.719 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.719 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:20.719 {
00:21:20.719 "cntlid": 107,
00:21:20.719 "qid": 0,
00:21:20.719 "state": "enabled",
00:21:20.719 "thread": "nvmf_tgt_poll_group_000",
00:21:20.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:20.719 "listen_address": {
00:21:20.719 "trtype": "TCP",
00:21:20.719 "adrfam": "IPv4",
00:21:20.719 "traddr": "10.0.0.2",
00:21:20.719 "trsvcid": "4420"
00:21:20.719 },
00:21:20.719 "peer_address": {
00:21:20.719 "trtype": "TCP",
00:21:20.719 "adrfam": "IPv4",
00:21:20.719 "traddr": "10.0.0.1",
00:21:20.719 "trsvcid": "41464"
00:21:20.719 },
00:21:20.719 "auth": {
00:21:20.719 "state": "completed",
00:21:20.719 "digest": "sha512",
00:21:20.719 "dhgroup": "ffdhe2048"
00:21:20.719 }
00:21:20.719 }
00:21:20.719 ]'
00:21:20.719 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:20.719 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:20.719 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:20.719 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:20.719 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:20.719 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:20.719 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:20.719 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:21.287 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==:
00:21:21.287 18:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==:
00:21:22.219 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:22.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:22.219 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:22.219 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.219 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:22.219 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.219 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:22.219 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:22.219 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:22.522 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:21:22.522 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:22.522 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:22.522 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:22.522 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:22.522 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:22.522 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:22.522 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.522 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:22.522 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.522 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:22.522 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:22.522 18:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:22.782
00:21:22.782 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:22.782 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:22.782 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:23.040 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:23.040 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:23.040 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:23.040 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:23.040 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:23.040 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:23.040 {
00:21:23.040 "cntlid": 109,
00:21:23.040 "qid": 0,
00:21:23.040 "state": "enabled",
00:21:23.040 "thread": "nvmf_tgt_poll_group_000",
00:21:23.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:23.040 "listen_address": {
00:21:23.040 "trtype": "TCP",
00:21:23.040 "adrfam": "IPv4",
00:21:23.040 "traddr": "10.0.0.2",
00:21:23.040 "trsvcid": "4420"
00:21:23.040 },
00:21:23.040 "peer_address": {
00:21:23.040 "trtype": "TCP",
00:21:23.040 "adrfam": "IPv4",
00:21:23.040 "traddr": "10.0.0.1",
00:21:23.040 "trsvcid": "42100"
00:21:23.040 },
00:21:23.040 "auth": {
00:21:23.040 "state": "completed",
00:21:23.040 "digest": "sha512",
00:21:23.040 "dhgroup": "ffdhe2048"
00:21:23.040 }
00:21:23.040 }
00:21:23.040 ]'
00:21:23.040 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:23.040 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:23.040 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:23.040 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:23.040 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:23.040 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:23.040 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:23.040 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:23.298 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P:
00:21:23.298 18:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P:
00:21:24.232 18:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:24.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:24.232 18:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:24.232 18:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:24.232 18:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:24.232 18:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:24.232 18:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:24.232 18:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:24.490 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:21:24.490 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:24.490 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:24.490 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:24.490 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:24.490 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:24.490 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:21:24.490 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:24.490 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:24.490 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:24.490 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:24.490 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:24.490 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:25.056
00:21:25.056 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:25.056 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:25.056 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:25.056 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:25.316 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:25.316 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.316 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:25.316 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.316 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:25.316 {
00:21:25.316 "cntlid": 111,
00:21:25.316 "qid": 0,
00:21:25.316 "state": "enabled",
00:21:25.316 "thread": "nvmf_tgt_poll_group_000",
00:21:25.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:25.316 "listen_address": {
00:21:25.316 "trtype": "TCP",
00:21:25.316 "adrfam": "IPv4",
00:21:25.316 "traddr": "10.0.0.2",
00:21:25.316 "trsvcid": "4420"
00:21:25.316 },
00:21:25.316 "peer_address": {
00:21:25.316 "trtype": "TCP",
00:21:25.316 "adrfam": "IPv4",
00:21:25.316 "traddr": "10.0.0.1",
00:21:25.316 "trsvcid": "42122"
00:21:25.316 },
00:21:25.316 "auth": {
00:21:25.316 "state": "completed",
00:21:25.316 "digest": "sha512",
00:21:25.316 "dhgroup": "ffdhe2048"
00:21:25.316 }
00:21:25.316 }
00:21:25.316 ]'
00:21:25.316 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:25.316 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:25.316 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:25.316 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:25.316 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:25.316 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:25.316 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:25.316 18:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:25.573 18:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=:
00:21:25.574 18:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=:
00:21:26.508 18:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:26.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:26.508 18:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:26.508 18:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:26.508 18:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:26.508 18:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:26.508 18:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:26.508 18:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:26.508 18:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:26.508 18:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:26.765 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:21:26.765 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:26.765 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:26.765 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:26.765 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:26.765 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:26.765 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:26.765 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:26.765 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:26.765 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:26.765 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:26.765 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:26.765 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:27.023
00:21:27.023 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:27.023 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:27.023 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:27.281 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:27.281 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:27.281 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:27.281 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:27.281 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:27.281 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:27.281 {
00:21:27.281 "cntlid": 113,
00:21:27.281 "qid": 0,
00:21:27.281 "state": "enabled",
00:21:27.281 "thread": "nvmf_tgt_poll_group_000",
00:21:27.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:27.281 "listen_address": {
00:21:27.281 "trtype": "TCP",
00:21:27.281 "adrfam": "IPv4",
00:21:27.281 "traddr": "10.0.0.2",
00:21:27.281 "trsvcid": "4420"
00:21:27.281 },
00:21:27.281 "peer_address": {
00:21:27.281 "trtype": "TCP",
00:21:27.281 "adrfam": "IPv4",
00:21:27.281 "traddr": "10.0.0.1",
00:21:27.281 "trsvcid": "42142"
00:21:27.281 },
00:21:27.281 "auth": {
00:21:27.281 "state": "completed",
00:21:27.281 "digest": "sha512",
00:21:27.281 "dhgroup": "ffdhe3072"
00:21:27.281 }
00:21:27.281 }
00:21:27.281 ]'
00:21:27.281 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:27.539 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:27.539 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:27.539 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:27.539 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:27.539 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:27.539 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:27.539 18:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:27.797 18:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=:
00:21:27.797 18:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=:
00:21:28.730 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:28.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:28.730 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:28.730 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:28.730 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:28.731 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:28.731 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:28.731 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:28.731 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:28.989 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:21:28.989 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:28.989 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:28.989 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:28.989 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:28.989 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:28.989 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:28.989 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:28.989 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:28.989 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:28.989 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:28.989 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:28.989 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:29.246
00:21:29.246 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:29.246 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:29.246 18:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:29.811 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:29.811 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:29.811 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:29.811 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:29.811 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:29.811 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:29.811 {
00:21:29.811 "cntlid": 115,
00:21:29.811 "qid": 0,
00:21:29.811 "state": "enabled",
00:21:29.811 "thread": "nvmf_tgt_poll_group_000",
00:21:29.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:29.811 "listen_address": {
00:21:29.811 "trtype": "TCP",
00:21:29.811 "adrfam": "IPv4",
00:21:29.811 "traddr": "10.0.0.2",
00:21:29.811 "trsvcid": "4420"
00:21:29.811 },
00:21:29.811 "peer_address": {
00:21:29.811 "trtype": "TCP",
00:21:29.811 "adrfam": "IPv4",
00:21:29.811 "traddr": "10.0.0.1",
00:21:29.811 "trsvcid": "42172"
00:21:29.811 },
00:21:29.811 "auth": {
00:21:29.811 "state": "completed",
00:21:29.811 "digest": "sha512",
00:21:29.811 "dhgroup": "ffdhe3072"
00:21:29.811 }
00:21:29.811 }
00:21:29.811 ]'
00:21:29.811 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:29.811 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:29.811 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:29.811 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:29.811 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:29.811 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:29.811 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:29.811 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:30.069 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==:
00:21:30.070 18:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==:
00:21:31.002 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:31.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:31.002 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:31.002 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.002 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:31.002 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.002 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:31.002 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:31.002 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:31.260 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:21:31.260 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:31.260 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:31.261 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:31.261 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:31.261 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:31.261 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:31.261 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.261 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set
+x 00:21:31.261 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.261 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.261 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.261 18:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.519 00:21:31.519 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.519 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.519 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.777 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.777 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.777 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.777 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.777 18:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.777 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.777 { 00:21:31.777 "cntlid": 117, 00:21:31.777 "qid": 0, 00:21:31.778 "state": "enabled", 00:21:31.778 "thread": "nvmf_tgt_poll_group_000", 00:21:31.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:31.778 "listen_address": { 00:21:31.778 "trtype": "TCP", 00:21:31.778 "adrfam": "IPv4", 00:21:31.778 "traddr": "10.0.0.2", 00:21:31.778 "trsvcid": "4420" 00:21:31.778 }, 00:21:31.778 "peer_address": { 00:21:31.778 "trtype": "TCP", 00:21:31.778 "adrfam": "IPv4", 00:21:31.778 "traddr": "10.0.0.1", 00:21:31.778 "trsvcid": "59500" 00:21:31.778 }, 00:21:31.778 "auth": { 00:21:31.778 "state": "completed", 00:21:31.778 "digest": "sha512", 00:21:31.778 "dhgroup": "ffdhe3072" 00:21:31.778 } 00:21:31.778 } 00:21:31.778 ]' 00:21:31.778 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.778 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.778 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.036 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:32.036 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.036 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.036 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.036 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.295 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:21:32.295 18:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:21:33.227 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.227 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.227 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.227 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.227 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.227 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.227 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:33.227 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:33.485 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:33.485 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.485 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.485 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:33.485 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:33.485 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.485 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:33.485 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.485 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.485 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.485 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:33.485 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.485 18:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.742 00:21:33.742 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.742 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.742 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.000 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.000 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.000 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.000 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.258 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.258 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.258 { 00:21:34.258 "cntlid": 119, 00:21:34.258 "qid": 0, 00:21:34.258 "state": "enabled", 00:21:34.258 "thread": "nvmf_tgt_poll_group_000", 00:21:34.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:34.258 "listen_address": { 00:21:34.258 "trtype": "TCP", 00:21:34.258 "adrfam": "IPv4", 00:21:34.258 "traddr": "10.0.0.2", 00:21:34.258 "trsvcid": "4420" 00:21:34.258 }, 00:21:34.258 "peer_address": { 00:21:34.258 "trtype": "TCP", 00:21:34.258 "adrfam": "IPv4", 00:21:34.258 "traddr": "10.0.0.1", 
00:21:34.258 "trsvcid": "59526" 00:21:34.258 }, 00:21:34.258 "auth": { 00:21:34.258 "state": "completed", 00:21:34.258 "digest": "sha512", 00:21:34.258 "dhgroup": "ffdhe3072" 00:21:34.258 } 00:21:34.258 } 00:21:34.258 ]' 00:21:34.258 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.258 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.258 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.258 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:34.259 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.259 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.259 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.259 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.516 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:21:34.516 18:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:21:35.449 18:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.449 18:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.449 18:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.449 18:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.449 18:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.449 18:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.449 18:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.449 18:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.449 18:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.707 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:35.707 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.707 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.707 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:35.707 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:35.707 18:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.707 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.707 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.707 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.707 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.707 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.707 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.707 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.964 00:21:35.964 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.964 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.964 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.531 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.531 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.531 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.531 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.531 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.531 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.531 { 00:21:36.531 "cntlid": 121, 00:21:36.531 "qid": 0, 00:21:36.531 "state": "enabled", 00:21:36.531 "thread": "nvmf_tgt_poll_group_000", 00:21:36.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.531 "listen_address": { 00:21:36.531 "trtype": "TCP", 00:21:36.531 "adrfam": "IPv4", 00:21:36.531 "traddr": "10.0.0.2", 00:21:36.531 "trsvcid": "4420" 00:21:36.531 }, 00:21:36.531 "peer_address": { 00:21:36.531 "trtype": "TCP", 00:21:36.531 "adrfam": "IPv4", 00:21:36.531 "traddr": "10.0.0.1", 00:21:36.531 "trsvcid": "59542" 00:21:36.531 }, 00:21:36.531 "auth": { 00:21:36.531 "state": "completed", 00:21:36.531 "digest": "sha512", 00:21:36.531 "dhgroup": "ffdhe4096" 00:21:36.531 } 00:21:36.531 } 00:21:36.531 ]' 00:21:36.531 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.531 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.531 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.531 18:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.531 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.531 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.531 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.531 18:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.789 18:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:21:36.789 18:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:21:37.722 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.722 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.722 18:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.722 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.722 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.722 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.722 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.722 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.980 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:37.980 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.980 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.980 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:37.980 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:37.980 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.980 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.980 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.980 18:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.980 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.980 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.980 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.980 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.238 00:21:38.238 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.238 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.238 18:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.496 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.496 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.496 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.496 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:38.496 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.496 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.496 { 00:21:38.496 "cntlid": 123, 00:21:38.496 "qid": 0, 00:21:38.496 "state": "enabled", 00:21:38.496 "thread": "nvmf_tgt_poll_group_000", 00:21:38.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:38.496 "listen_address": { 00:21:38.496 "trtype": "TCP", 00:21:38.496 "adrfam": "IPv4", 00:21:38.496 "traddr": "10.0.0.2", 00:21:38.496 "trsvcid": "4420" 00:21:38.496 }, 00:21:38.496 "peer_address": { 00:21:38.496 "trtype": "TCP", 00:21:38.496 "adrfam": "IPv4", 00:21:38.496 "traddr": "10.0.0.1", 00:21:38.496 "trsvcid": "59554" 00:21:38.496 }, 00:21:38.496 "auth": { 00:21:38.496 "state": "completed", 00:21:38.496 "digest": "sha512", 00:21:38.496 "dhgroup": "ffdhe4096" 00:21:38.496 } 00:21:38.496 } 00:21:38.496 ]' 00:21:38.496 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.754 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.754 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.754 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:38.754 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.754 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.754 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.754 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.011 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:21:39.012 18:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:21:39.945 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.945 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.945 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.945 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.945 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.945 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.945 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:39.945 18:42:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.203 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:40.203 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.203 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.203 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:40.203 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:40.203 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.203 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.203 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.203 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.203 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.203 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.203 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.203 18:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.769 00:21:40.769 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.769 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.769 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.769 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.769 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.769 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.769 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.027 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.027 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.027 { 00:21:41.027 "cntlid": 125, 00:21:41.027 "qid": 0, 00:21:41.027 "state": "enabled", 00:21:41.027 "thread": "nvmf_tgt_poll_group_000", 00:21:41.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.027 "listen_address": { 00:21:41.027 "trtype": "TCP", 00:21:41.027 "adrfam": "IPv4", 00:21:41.027 "traddr": "10.0.0.2", 00:21:41.027 
"trsvcid": "4420" 00:21:41.027 }, 00:21:41.027 "peer_address": { 00:21:41.027 "trtype": "TCP", 00:21:41.027 "adrfam": "IPv4", 00:21:41.027 "traddr": "10.0.0.1", 00:21:41.027 "trsvcid": "59588" 00:21:41.027 }, 00:21:41.027 "auth": { 00:21:41.027 "state": "completed", 00:21:41.027 "digest": "sha512", 00:21:41.027 "dhgroup": "ffdhe4096" 00:21:41.027 } 00:21:41.027 } 00:21:41.027 ]' 00:21:41.027 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.027 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.027 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.027 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:41.027 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.027 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.027 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.027 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.285 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:21:41.285 18:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:21:42.276 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.276 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.276 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.276 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.276 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.276 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.276 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.276 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.534 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:42.534 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.534 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.534 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:42.534 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:42.534 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.534 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:42.534 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.534 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.534 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.534 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:42.534 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.534 18:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.791 00:21:42.791 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.791 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.791 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.049 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.049 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.049 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.049 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.049 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.049 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.049 { 00:21:43.049 "cntlid": 127, 00:21:43.049 "qid": 0, 00:21:43.049 "state": "enabled", 00:21:43.049 "thread": "nvmf_tgt_poll_group_000", 00:21:43.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:43.049 "listen_address": { 00:21:43.049 "trtype": "TCP", 00:21:43.049 "adrfam": "IPv4", 00:21:43.049 "traddr": "10.0.0.2", 00:21:43.049 "trsvcid": "4420" 00:21:43.049 }, 00:21:43.049 "peer_address": { 00:21:43.049 "trtype": "TCP", 00:21:43.049 "adrfam": "IPv4", 00:21:43.049 "traddr": "10.0.0.1", 00:21:43.049 "trsvcid": "49498" 00:21:43.049 }, 00:21:43.049 "auth": { 00:21:43.049 "state": "completed", 00:21:43.049 "digest": "sha512", 00:21:43.049 "dhgroup": "ffdhe4096" 00:21:43.049 } 00:21:43.049 } 00:21:43.049 ]' 00:21:43.049 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.049 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.049 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.308 18:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:43.308 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.308 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.308 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.308 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.566 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:21:43.566 18:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:21:44.499 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.499 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.499 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.499 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
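The `nvme connect` invocations above pass in-band authentication secrets in the DHHC-1 transport format (`DHHC-1:<hash>:<base64>:`). As a hedged sketch of how such a secret is laid out (this follows my understanding of nvme-cli's key generation: raw key bytes plus a little-endian CRC32, base64-encoded; the key bytes below are a made-up example, not one of the log's secrets):

```python
import base64
import zlib

# Hypothetical 48-byte secret; the "02" hash field in DHHC-1:02:... is
# believed to denote a SHA-384-sized key. NOT a secret from the log above.
key = bytes(range(48))

# nvme-cli appends a little-endian CRC32 of the key bytes before encoding.
crc = zlib.crc32(key).to_bytes(4, "little")
secret = "DHHC-1:02:" + base64.b64encode(key + crc).decode("ascii") + ":"

# Parsing reverses the steps: split the fields, decode, check the CRC.
prefix, hash_id, blob, _ = secret.split(":")
raw = base64.b64decode(blob)
assert prefix == "DHHC-1" and hash_id == "02"
assert raw[:-4] == key
assert zlib.crc32(raw[:-4]).to_bytes(4, "little") == raw[-4:]
```

The `--dhchap-secret`/`--dhchap-ctrl-secret` pairs in the log differ per cycle because each iteration exercises a different key slot (key0 through key3) on the same host NQN.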
00:21:44.499 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.499 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.499 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.499 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.499 18:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:44.756 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:44.756 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.756 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.756 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:44.756 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:44.756 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.756 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.756 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.756 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:44.756 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.756 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.756 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.756 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.322 00:21:45.322 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.322 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.322 18:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.580 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.580 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.580 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.580 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.580 18:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.580 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.580 { 00:21:45.580 "cntlid": 129, 00:21:45.580 "qid": 0, 00:21:45.580 "state": "enabled", 00:21:45.580 "thread": "nvmf_tgt_poll_group_000", 00:21:45.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:45.580 "listen_address": { 00:21:45.580 "trtype": "TCP", 00:21:45.580 "adrfam": "IPv4", 00:21:45.580 "traddr": "10.0.0.2", 00:21:45.580 "trsvcid": "4420" 00:21:45.580 }, 00:21:45.580 "peer_address": { 00:21:45.580 "trtype": "TCP", 00:21:45.580 "adrfam": "IPv4", 00:21:45.580 "traddr": "10.0.0.1", 00:21:45.580 "trsvcid": "49510" 00:21:45.580 }, 00:21:45.580 "auth": { 00:21:45.580 "state": "completed", 00:21:45.580 "digest": "sha512", 00:21:45.580 "dhgroup": "ffdhe6144" 00:21:45.580 } 00:21:45.580 } 00:21:45.580 ]' 00:21:45.580 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.580 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.580 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.580 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:45.580 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.580 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.580 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.580 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.837 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:21:45.837 18:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:21:46.770 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.770 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.770 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.770 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.770 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.770 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.770 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.770 18:42:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:47.028 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:47.028 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.028 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.028 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:47.028 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:47.028 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.028 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.028 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.028 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.286 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.286 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.286 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.286 18:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.851 00:21:47.851 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.851 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.851 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.851 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.851 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.109 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.109 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.109 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.109 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.109 { 00:21:48.109 "cntlid": 131, 00:21:48.109 "qid": 0, 00:21:48.109 "state": "enabled", 00:21:48.109 "thread": "nvmf_tgt_poll_group_000", 00:21:48.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:48.109 "listen_address": { 00:21:48.109 "trtype": "TCP", 00:21:48.109 "adrfam": "IPv4", 00:21:48.109 "traddr": "10.0.0.2", 00:21:48.109 
"trsvcid": "4420" 00:21:48.109 }, 00:21:48.109 "peer_address": { 00:21:48.109 "trtype": "TCP", 00:21:48.109 "adrfam": "IPv4", 00:21:48.109 "traddr": "10.0.0.1", 00:21:48.109 "trsvcid": "49538" 00:21:48.109 }, 00:21:48.109 "auth": { 00:21:48.109 "state": "completed", 00:21:48.109 "digest": "sha512", 00:21:48.109 "dhgroup": "ffdhe6144" 00:21:48.109 } 00:21:48.109 } 00:21:48.109 ]' 00:21:48.109 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.109 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.109 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.109 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:48.109 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.109 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.109 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.109 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.367 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:21:48.367 18:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:21:49.300 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.300 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.300 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.300 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.300 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.300 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.300 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.300 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.558 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:49.558 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.558 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.558 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:49.558 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:49.558 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.558 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.558 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.558 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.558 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.558 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.558 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.558 18:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.123 00:21:50.123 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.123 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.123 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.381 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.381 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.381 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.381 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.381 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.381 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.381 { 00:21:50.381 "cntlid": 133, 00:21:50.381 "qid": 0, 00:21:50.381 "state": "enabled", 00:21:50.381 "thread": "nvmf_tgt_poll_group_000", 00:21:50.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.381 "listen_address": { 00:21:50.381 "trtype": "TCP", 00:21:50.381 "adrfam": "IPv4", 00:21:50.381 "traddr": "10.0.0.2", 00:21:50.381 "trsvcid": "4420" 00:21:50.381 }, 00:21:50.381 "peer_address": { 00:21:50.381 "trtype": "TCP", 00:21:50.381 "adrfam": "IPv4", 00:21:50.381 "traddr": "10.0.0.1", 00:21:50.381 "trsvcid": "49568" 00:21:50.381 }, 00:21:50.381 "auth": { 00:21:50.381 "state": "completed", 00:21:50.381 "digest": "sha512", 00:21:50.381 "dhgroup": "ffdhe6144" 00:21:50.381 } 00:21:50.381 } 00:21:50.381 ]' 00:21:50.381 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.381 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.381 18:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.381 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.381 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.381 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.381 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.381 18:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.639 18:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:21:50.639 18:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:21:51.570 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.570 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.570 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.570 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.570 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.570 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.570 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:51.570 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:51.828 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:51.828 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.828 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.828 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:51.828 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:51.828 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.828 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:51.828 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.828 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.828 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.828 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:51.828 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.828 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.394 00:21:52.394 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.394 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.394 18:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.652 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.652 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.652 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.652 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:52.652 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.652 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.652 { 00:21:52.652 "cntlid": 135, 00:21:52.652 "qid": 0, 00:21:52.652 "state": "enabled", 00:21:52.652 "thread": "nvmf_tgt_poll_group_000", 00:21:52.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.652 "listen_address": { 00:21:52.652 "trtype": "TCP", 00:21:52.652 "adrfam": "IPv4", 00:21:52.652 "traddr": "10.0.0.2", 00:21:52.652 "trsvcid": "4420" 00:21:52.652 }, 00:21:52.652 "peer_address": { 00:21:52.652 "trtype": "TCP", 00:21:52.652 "adrfam": "IPv4", 00:21:52.652 "traddr": "10.0.0.1", 00:21:52.652 "trsvcid": "39838" 00:21:52.652 }, 00:21:52.652 "auth": { 00:21:52.652 "state": "completed", 00:21:52.652 "digest": "sha512", 00:21:52.652 "dhgroup": "ffdhe6144" 00:21:52.652 } 00:21:52.652 } 00:21:52.652 ]' 00:21:52.652 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.909 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.909 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.909 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:52.909 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.909 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.909 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.909 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.166 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:21:53.166 18:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:21:54.099 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.099 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.099 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.099 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.099 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.099 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.099 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.099 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.099 18:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.357 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:54.357 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.357 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.357 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:54.357 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:54.357 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.357 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.357 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.357 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.357 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.357 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.357 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.357 18:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.290 00:21:55.290 18:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.290 18:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.290 18:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.548 18:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.548 18:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.548 18:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.548 18:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.548 18:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.548 18:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.548 { 00:21:55.548 "cntlid": 137, 00:21:55.548 "qid": 0, 00:21:55.548 "state": "enabled", 00:21:55.548 "thread": "nvmf_tgt_poll_group_000", 00:21:55.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.548 "listen_address": { 00:21:55.548 "trtype": "TCP", 00:21:55.548 "adrfam": "IPv4", 00:21:55.548 "traddr": "10.0.0.2", 00:21:55.548 
"trsvcid": "4420" 00:21:55.548 }, 00:21:55.548 "peer_address": { 00:21:55.548 "trtype": "TCP", 00:21:55.548 "adrfam": "IPv4", 00:21:55.548 "traddr": "10.0.0.1", 00:21:55.548 "trsvcid": "39874" 00:21:55.548 }, 00:21:55.548 "auth": { 00:21:55.548 "state": "completed", 00:21:55.548 "digest": "sha512", 00:21:55.548 "dhgroup": "ffdhe8192" 00:21:55.548 } 00:21:55.548 } 00:21:55.548 ]' 00:21:55.548 18:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.548 18:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.548 18:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.548 18:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.548 18:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.548 18:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.548 18:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.548 18:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.806 18:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:21:55.806 18:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:21:56.739 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.739 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.739 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.739 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.739 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.739 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.739 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.739 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.997 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:56.997 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.997 18:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.997 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:56.997 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:56.997 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.997 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.997 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.997 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.997 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.997 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.997 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.997 18:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.930 00:21:57.930 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.930 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.930 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.188 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.188 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.188 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.188 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.188 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.188 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.188 { 00:21:58.188 "cntlid": 139, 00:21:58.188 "qid": 0, 00:21:58.188 "state": "enabled", 00:21:58.188 "thread": "nvmf_tgt_poll_group_000", 00:21:58.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:58.188 "listen_address": { 00:21:58.188 "trtype": "TCP", 00:21:58.188 "adrfam": "IPv4", 00:21:58.188 "traddr": "10.0.0.2", 00:21:58.188 "trsvcid": "4420" 00:21:58.188 }, 00:21:58.188 "peer_address": { 00:21:58.188 "trtype": "TCP", 00:21:58.188 "adrfam": "IPv4", 00:21:58.188 "traddr": "10.0.0.1", 00:21:58.188 "trsvcid": "39906" 00:21:58.188 }, 00:21:58.188 "auth": { 00:21:58.188 "state": "completed", 00:21:58.188 "digest": "sha512", 00:21:58.188 "dhgroup": "ffdhe8192" 00:21:58.188 } 00:21:58.188 } 00:21:58.188 ]' 00:21:58.188 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.188 18:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.188 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.188 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.188 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.188 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.188 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.188 18:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.445 18:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:21:58.446 18:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: --dhchap-ctrl-secret DHHC-1:02:ZjNlNGE0ZGJiYWNiZGRmMzc5MTFiOTg1NTZhOTRhZDJhMDYwZmE1OTM0YjE0Yjdi/7Giqg==: 00:21:59.379 18:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.379 18:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.379 18:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.379 18:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.379 18:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.379 18:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.379 18:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.379 18:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.637 18:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:59.637 18:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.637 18:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.637 18:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:59.637 18:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:59.637 18:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.637 18:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:59.637 18:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.637 18:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.637 18:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.637 18:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.637 18:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.637 18:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.570 00:22:00.570 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.570 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.570 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.828 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.828 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.828 18:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.828 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.828 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.828 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.828 { 00:22:00.828 "cntlid": 141, 00:22:00.828 "qid": 0, 00:22:00.828 "state": "enabled", 00:22:00.828 "thread": "nvmf_tgt_poll_group_000", 00:22:00.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:00.828 "listen_address": { 00:22:00.828 "trtype": "TCP", 00:22:00.828 "adrfam": "IPv4", 00:22:00.828 "traddr": "10.0.0.2", 00:22:00.828 "trsvcid": "4420" 00:22:00.828 }, 00:22:00.829 "peer_address": { 00:22:00.829 "trtype": "TCP", 00:22:00.829 "adrfam": "IPv4", 00:22:00.829 "traddr": "10.0.0.1", 00:22:00.829 "trsvcid": "39924" 00:22:00.829 }, 00:22:00.829 "auth": { 00:22:00.829 "state": "completed", 00:22:00.829 "digest": "sha512", 00:22:00.829 "dhgroup": "ffdhe8192" 00:22:00.829 } 00:22:00.829 } 00:22:00.829 ]' 00:22:00.829 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.829 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.829 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.829 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.829 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.086 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.086 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.086 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.344 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:22:01.345 18:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:01:OTNhMjlkMDgwZmFjNjFlNmI0MzJhZTQ0MTIwMDdhNmb8hk4P: 00:22:02.277 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.277 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.277 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.277 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.277 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.277 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.277 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:02.277 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:02.534 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:02.534 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.534 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.534 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:02.534 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.534 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.534 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:02.534 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.534 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.534 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.534 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.534 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.534 18:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.487 00:22:03.487 18:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.487 18:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.487 18:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.745 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.745 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.745 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.745 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.745 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.745 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.745 { 00:22:03.745 "cntlid": 143, 00:22:03.745 "qid": 0, 00:22:03.745 "state": "enabled", 00:22:03.745 "thread": "nvmf_tgt_poll_group_000", 00:22:03.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.745 "listen_address": { 00:22:03.745 "trtype": "TCP", 00:22:03.745 "adrfam": 
"IPv4", 00:22:03.745 "traddr": "10.0.0.2", 00:22:03.745 "trsvcid": "4420" 00:22:03.745 }, 00:22:03.745 "peer_address": { 00:22:03.745 "trtype": "TCP", 00:22:03.745 "adrfam": "IPv4", 00:22:03.745 "traddr": "10.0.0.1", 00:22:03.745 "trsvcid": "33126" 00:22:03.745 }, 00:22:03.745 "auth": { 00:22:03.745 "state": "completed", 00:22:03.745 "digest": "sha512", 00:22:03.745 "dhgroup": "ffdhe8192" 00:22:03.745 } 00:22:03.745 } 00:22:03.745 ]' 00:22:03.745 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.745 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.745 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.745 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:03.745 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.745 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.745 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.745 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.003 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:22:04.003 18:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:22:04.937 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.937 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.937 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.937 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.937 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.937 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:04.937 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:04.937 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:04.937 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:04.937 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:04.937 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.503 18:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:05.503 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.503 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.503 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:05.503 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:05.503 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.503 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.503 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.503 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.503 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.503 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.503 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.503 18:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.437 00:22:06.437 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.437 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.437 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.437 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.437 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.437 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.437 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.437 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.437 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.437 { 00:22:06.437 "cntlid": 145, 00:22:06.437 "qid": 0, 00:22:06.437 "state": "enabled", 00:22:06.437 "thread": "nvmf_tgt_poll_group_000", 00:22:06.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.437 "listen_address": { 00:22:06.437 "trtype": "TCP", 00:22:06.437 "adrfam": "IPv4", 00:22:06.437 "traddr": "10.0.0.2", 00:22:06.437 "trsvcid": "4420" 00:22:06.437 }, 00:22:06.437 "peer_address": { 00:22:06.437 "trtype": "TCP", 00:22:06.437 "adrfam": "IPv4", 00:22:06.437 "traddr": "10.0.0.1", 00:22:06.437 "trsvcid": "33160" 00:22:06.437 }, 00:22:06.437 "auth": { 00:22:06.437 "state": 
"completed", 00:22:06.437 "digest": "sha512", 00:22:06.437 "dhgroup": "ffdhe8192" 00:22:06.437 } 00:22:06.437 } 00:22:06.437 ]' 00:22:06.437 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.437 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.437 18:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.696 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.696 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.696 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.696 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.696 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.955 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:22:06.955 18:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDgxYTRmNjdiZDIyN2FlOGYxMjlhZTczZjFlOTk0MzcxZTNhMTA1Yjk3NzZlYzVlCkV9oQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDg5MTY1ZTI4NzRjZDU3MTBiNDM1YjQ1NWVjNTE4MjUyYzM1OTkzNjA2MjE3MmNmMjY3MWViZTg2OGUyYWUyZSapUok=: 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:07.941 18:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:08.876 request: 00:22:08.876 { 00:22:08.876 "name": "nvme0", 00:22:08.876 "trtype": "tcp", 00:22:08.876 "traddr": "10.0.0.2", 00:22:08.876 "adrfam": "ipv4", 00:22:08.876 "trsvcid": "4420", 00:22:08.876 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:08.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:08.876 "prchk_reftag": false, 00:22:08.876 "prchk_guard": false, 00:22:08.876 "hdgst": false, 00:22:08.876 "ddgst": false, 00:22:08.876 "dhchap_key": "key2", 00:22:08.876 "allow_unrecognized_csi": false, 00:22:08.876 "method": "bdev_nvme_attach_controller", 00:22:08.876 "req_id": 1 00:22:08.876 } 00:22:08.876 Got JSON-RPC error response 00:22:08.876 response: 00:22:08.876 { 00:22:08.876 "code": -5, 00:22:08.876 "message": 
"Input/output error" 00:22:08.876 } 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:08.876 18:42:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:08.876 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:09.442 request: 00:22:09.442 { 00:22:09.442 "name": "nvme0", 00:22:09.442 "trtype": "tcp", 00:22:09.442 "traddr": "10.0.0.2", 00:22:09.442 "adrfam": "ipv4", 00:22:09.442 "trsvcid": "4420", 00:22:09.442 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:09.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:09.442 "prchk_reftag": false, 00:22:09.442 "prchk_guard": false, 00:22:09.442 "hdgst": 
false, 00:22:09.442 "ddgst": false, 00:22:09.442 "dhchap_key": "key1", 00:22:09.442 "dhchap_ctrlr_key": "ckey2", 00:22:09.442 "allow_unrecognized_csi": false, 00:22:09.442 "method": "bdev_nvme_attach_controller", 00:22:09.442 "req_id": 1 00:22:09.442 } 00:22:09.442 Got JSON-RPC error response 00:22:09.442 response: 00:22:09.442 { 00:22:09.442 "code": -5, 00:22:09.442 "message": "Input/output error" 00:22:09.442 } 00:22:09.442 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:09.442 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:09.442 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:09.442 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.443 18:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.375 request: 00:22:10.375 { 00:22:10.375 "name": "nvme0", 00:22:10.375 "trtype": 
"tcp", 00:22:10.375 "traddr": "10.0.0.2", 00:22:10.375 "adrfam": "ipv4", 00:22:10.375 "trsvcid": "4420", 00:22:10.375 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:10.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.375 "prchk_reftag": false, 00:22:10.375 "prchk_guard": false, 00:22:10.375 "hdgst": false, 00:22:10.375 "ddgst": false, 00:22:10.375 "dhchap_key": "key1", 00:22:10.375 "dhchap_ctrlr_key": "ckey1", 00:22:10.375 "allow_unrecognized_csi": false, 00:22:10.375 "method": "bdev_nvme_attach_controller", 00:22:10.375 "req_id": 1 00:22:10.375 } 00:22:10.375 Got JSON-RPC error response 00:22:10.375 response: 00:22:10.375 { 00:22:10.375 "code": -5, 00:22:10.375 "message": "Input/output error" 00:22:10.375 } 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 731917 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 731917 ']' 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 731917 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 731917 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 731917' 00:22:10.375 killing process with pid 731917 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 731917 00:22:10.375 18:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 731917 00:22:10.634 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:10.634 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:10.634 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.634 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.634 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=754448 00:22:10.634 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:10.634 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 754448 00:22:10.634 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 754448 ']' 00:22:10.634 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.634 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.634 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.634 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.634 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.892 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.892 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:10.892 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:10.892 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.892 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.892 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.892 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:10.892 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 754448 00:22:10.892 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 754448 ']' 00:22:10.892 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.892 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.892 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.892 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.892 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.151 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.151 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:11.151 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:11.151 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.151 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.409 null0 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Gyz 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.BMl ]] 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BMl 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.QvV 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.oFv ]] 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.oFv 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.3cD 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.zAl ]] 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zAl 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.PPY 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:11.409 18:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.781 nvme0n1 00:22:12.781 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.781 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.781 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.036 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.036 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.036 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.036 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.036 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.036 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.036 { 00:22:13.036 "cntlid": 1, 00:22:13.036 "qid": 0, 00:22:13.036 "state": "enabled", 00:22:13.036 "thread": "nvmf_tgt_poll_group_000", 00:22:13.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:13.036 "listen_address": { 00:22:13.036 "trtype": "TCP", 00:22:13.036 "adrfam": "IPv4", 00:22:13.036 "traddr": "10.0.0.2", 00:22:13.036 "trsvcid": "4420" 00:22:13.036 }, 00:22:13.036 "peer_address": { 00:22:13.036 "trtype": "TCP", 00:22:13.036 "adrfam": "IPv4", 00:22:13.036 "traddr": 
"10.0.0.1", 00:22:13.036 "trsvcid": "54216" 00:22:13.036 }, 00:22:13.036 "auth": { 00:22:13.036 "state": "completed", 00:22:13.036 "digest": "sha512", 00:22:13.036 "dhgroup": "ffdhe8192" 00:22:13.036 } 00:22:13.036 } 00:22:13.036 ]' 00:22:13.037 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.037 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.037 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.037 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:13.037 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.037 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.037 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.037 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.293 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:22:13.293 18:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:22:14.226 18:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.226 18:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.226 18:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.226 18:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.226 18:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.226 18:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:14.226 18:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.226 18:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.226 18:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.226 18:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:14.226 18:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:14.484 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:14.484 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:14.484 18:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:14.484 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:14.484 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.484 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:14.484 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.484 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:14.484 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:14.484 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:15.050 request: 00:22:15.050 { 00:22:15.050 "name": "nvme0", 00:22:15.050 "trtype": "tcp", 00:22:15.050 "traddr": "10.0.0.2", 00:22:15.050 "adrfam": "ipv4", 00:22:15.050 "trsvcid": "4420", 00:22:15.050 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:15.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:15.050 "prchk_reftag": false, 00:22:15.050 "prchk_guard": false, 00:22:15.050 "hdgst": false, 00:22:15.050 "ddgst": false, 00:22:15.050 "dhchap_key": "key3", 00:22:15.050 
"allow_unrecognized_csi": false, 00:22:15.050 "method": "bdev_nvme_attach_controller", 00:22:15.050 "req_id": 1 00:22:15.050 } 00:22:15.050 Got JSON-RPC error response 00:22:15.050 response: 00:22:15.050 { 00:22:15.050 "code": -5, 00:22:15.050 "message": "Input/output error" 00:22:15.050 } 00:22:15.050 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:15.050 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:15.050 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:15.050 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:15.050 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:15.050 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:15.050 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:15.050 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:15.050 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:15.050 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:15.050 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:15.050 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:15.050 18:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:15.051 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:15.051 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:15.051 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:15.051 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:15.051 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:15.309 request: 00:22:15.309 { 00:22:15.309 "name": "nvme0", 00:22:15.309 "trtype": "tcp", 00:22:15.309 "traddr": "10.0.0.2", 00:22:15.309 "adrfam": "ipv4", 00:22:15.309 "trsvcid": "4420", 00:22:15.309 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:15.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:15.309 "prchk_reftag": false, 00:22:15.309 "prchk_guard": false, 00:22:15.309 "hdgst": false, 00:22:15.309 "ddgst": false, 00:22:15.309 "dhchap_key": "key3", 00:22:15.309 "allow_unrecognized_csi": false, 00:22:15.309 "method": "bdev_nvme_attach_controller", 00:22:15.309 "req_id": 1 00:22:15.309 } 00:22:15.309 Got JSON-RPC error response 00:22:15.309 response: 00:22:15.309 { 00:22:15.309 "code": -5, 00:22:15.309 "message": "Input/output error" 00:22:15.309 } 00:22:15.567 
18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:15.567 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:15.567 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:15.567 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:15.567 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:15.567 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:15.567 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:15.567 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:15.567 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:15.567 18:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.825 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:16.391 request: 00:22:16.391 { 00:22:16.391 "name": "nvme0", 00:22:16.391 "trtype": "tcp", 00:22:16.391 "traddr": "10.0.0.2", 00:22:16.391 "adrfam": "ipv4", 00:22:16.391 "trsvcid": "4420", 00:22:16.391 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:16.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:16.391 "prchk_reftag": false, 00:22:16.391 "prchk_guard": false, 00:22:16.391 "hdgst": false, 00:22:16.391 "ddgst": false, 00:22:16.391 "dhchap_key": "key0", 00:22:16.391 "dhchap_ctrlr_key": "key1", 00:22:16.391 "allow_unrecognized_csi": false, 00:22:16.391 "method": "bdev_nvme_attach_controller", 00:22:16.391 "req_id": 1 00:22:16.391 } 00:22:16.391 Got JSON-RPC error response 00:22:16.391 response: 00:22:16.391 { 00:22:16.391 "code": -5, 00:22:16.391 "message": "Input/output error" 00:22:16.391 } 00:22:16.391 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:16.391 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:16.391 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:16.391 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:16.391 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:16.391 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:16.391 18:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:16.650 nvme0n1 00:22:16.650 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:16.650 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.650 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:16.908 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.909 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.909 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.167 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:17.167 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.167 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:17.167 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.167 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:17.167 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:17.167 18:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:18.539 nvme0n1 00:22:18.539 18:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:18.539 18:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:18.539 18:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.797 18:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.797 18:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:18.797 18:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.797 18:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.797 
18:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.797 18:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:18.797 18:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:18.797 18:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.056 18:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.056 18:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:22:19.056 18:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: --dhchap-ctrl-secret DHHC-1:03:OWIzNzE5OWMwYmNlYjM1YWE5OTg4ZTlmYjA1NTM0YmY4N2RlYjgyODdhOWFiNjQ3YzBmZTI5MzU5YjVjZDY5YZlKGFU=: 00:22:19.989 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:19.989 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:19.989 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:19.989 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:19.989 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:19.989 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:19.989 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:19.989 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.989 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.246 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:20.246 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:20.246 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:20.246 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:20.246 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.246 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:20.246 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.246 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:20.246 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:20.246 18:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:21.178 request: 00:22:21.178 { 00:22:21.178 "name": "nvme0", 00:22:21.178 "trtype": "tcp", 00:22:21.178 "traddr": "10.0.0.2", 00:22:21.178 "adrfam": "ipv4", 00:22:21.178 "trsvcid": "4420", 00:22:21.178 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:21.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:21.178 "prchk_reftag": false, 00:22:21.178 "prchk_guard": false, 00:22:21.178 "hdgst": false, 00:22:21.178 "ddgst": false, 00:22:21.178 "dhchap_key": "key1", 00:22:21.178 "allow_unrecognized_csi": false, 00:22:21.178 "method": "bdev_nvme_attach_controller", 00:22:21.178 "req_id": 1 00:22:21.178 } 00:22:21.178 Got JSON-RPC error response 00:22:21.178 response: 00:22:21.178 { 00:22:21.178 "code": -5, 00:22:21.178 "message": "Input/output error" 00:22:21.178 } 00:22:21.178 18:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:21.178 18:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:21.178 18:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:21.178 18:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:21.178 18:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:21.178 18:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:21.178 18:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:22.551 nvme0n1 00:22:22.551 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:22.551 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:22.551 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.809 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.809 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.809 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.066 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.066 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.066 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:23.066 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.066 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:23.066 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:23.066 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:23.638 nvme0n1 00:22:23.638 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:23.638 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.638 18:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:23.895 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.895 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.895 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: '' 2s 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: ]] 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZTg0MTE5NTk1MmMzMzhlMDZmYWJlMTgyZjg5YWU2NDOBRJdF: 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:24.152 18:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:26.051 
18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: 2s 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:26.051 18:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: ]] 00:22:26.051 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTFiMTQ5NzRlZTcxYWEzMTNmNTU0MmQ1MjE5OGM2NTk0MmJmZWQ1YTk4MzAxZjYxc4AyXQ==: 00:22:26.309 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:26.309 18:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:28.209 18:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:29.582 nvme0n1 00:22:29.582 18:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.582 18:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.582 18:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.582 18:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.582 18:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.582 18:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:30.515 18:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:30.515 18:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:30.515 18:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.773 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.773 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.773 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.773 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.773 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.773 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:30.773 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:31.031 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:31.031 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:31.031 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.289 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.289 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:31.289 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.289 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.289 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.289 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:31.289 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:31.289 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:31.289 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:31.289 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.289 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:31.289 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.289 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:31.289 18:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:32.222 request: 00:22:32.222 { 00:22:32.222 "name": "nvme0", 00:22:32.222 "dhchap_key": "key1", 00:22:32.222 "dhchap_ctrlr_key": "key3", 00:22:32.222 "method": "bdev_nvme_set_keys", 00:22:32.222 "req_id": 1 00:22:32.222 } 00:22:32.222 Got JSON-RPC error response 00:22:32.222 response: 00:22:32.222 { 00:22:32.222 "code": -13, 00:22:32.222 "message": "Permission denied" 00:22:32.222 } 00:22:32.222 18:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:32.222 18:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:32.222 18:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:32.222 18:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:32.222 18:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:32.222 18:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:32.222 18:43:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.480 18:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:32.480 18:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:33.422 18:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:33.422 18:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:33.422 18:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.681 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:33.681 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:33.681 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.681 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.681 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.681 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:33.681 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:33.682 18:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:35.109 nvme0n1 00:22:35.109 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:35.109 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.109 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.109 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.109 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:35.109 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:35.109 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:35.109 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:35.109 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.109 18:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:35.109 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.109 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:35.109 18:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:36.068 request: 00:22:36.068 { 00:22:36.068 "name": "nvme0", 00:22:36.068 "dhchap_key": "key2", 00:22:36.068 "dhchap_ctrlr_key": "key0", 00:22:36.068 "method": "bdev_nvme_set_keys", 00:22:36.068 "req_id": 1 00:22:36.068 } 00:22:36.068 Got JSON-RPC error response 00:22:36.069 response: 00:22:36.069 { 00:22:36.069 "code": -13, 00:22:36.069 "message": "Permission denied" 00:22:36.069 } 00:22:36.069 18:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:36.069 18:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:36.069 18:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:36.069 18:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:36.069 18:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:36.069 18:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.069 18:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:36.326 18:43:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:36.326 18:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:37.274 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:37.274 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:37.274 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.531 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:37.532 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:37.532 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:37.532 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 731938 00:22:37.532 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 731938 ']' 00:22:37.532 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 731938 00:22:37.532 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:37.532 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:37.532 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 731938 00:22:37.532 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:37.532 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:37.532 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 731938' 00:22:37.532 killing process with pid 731938 00:22:37.532 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 731938 00:22:37.532 18:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 731938 00:22:37.789 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:37.790 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:37.790 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:37.790 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.790 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:37.790 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:37.790 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.790 rmmod nvme_tcp 00:22:37.790 rmmod nvme_fabrics 00:22:38.049 rmmod nvme_keyring 00:22:38.049 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:38.049 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:38.049 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:38.049 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 754448 ']' 00:22:38.049 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 754448 00:22:38.049 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 754448 ']' 00:22:38.049 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 754448 00:22:38.049 18:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:38.049 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:38.049 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 754448 00:22:38.049 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:38.049 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:38.049 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 754448' 00:22:38.049 killing process with pid 754448 00:22:38.049 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 754448 00:22:38.049 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 754448 00:22:38.308 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:38.308 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:38.308 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:38.308 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:38.308 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:38.308 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:38.308 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:38.308 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:38.308 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:38.308 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.308 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.308 18:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.216 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:40.216 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Gyz /tmp/spdk.key-sha256.QvV /tmp/spdk.key-sha384.3cD /tmp/spdk.key-sha512.PPY /tmp/spdk.key-sha512.BMl /tmp/spdk.key-sha384.oFv /tmp/spdk.key-sha256.zAl '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:40.216 00:22:40.216 real 3m29.312s 00:22:40.216 user 8m10.699s 00:22:40.216 sys 0m27.939s 00:22:40.216 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.216 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.216 ************************************ 00:22:40.216 END TEST nvmf_auth_target 00:22:40.216 ************************************ 00:22:40.216 18:43:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:40.216 18:43:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:40.216 18:43:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:40.216 18:43:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:40.216 18:43:26 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:40.216 ************************************ 00:22:40.216 START TEST nvmf_bdevio_no_huge 00:22:40.216 ************************************ 00:22:40.216 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:40.475 * Looking for test storage... 00:22:40.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:40.475 18:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:40.475 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:40.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.476 --rc genhtml_branch_coverage=1 00:22:40.476 --rc genhtml_function_coverage=1 00:22:40.476 --rc genhtml_legend=1 00:22:40.476 --rc geninfo_all_blocks=1 00:22:40.476 --rc geninfo_unexecuted_blocks=1 00:22:40.476 00:22:40.476 ' 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:40.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.476 --rc genhtml_branch_coverage=1 00:22:40.476 --rc genhtml_function_coverage=1 00:22:40.476 --rc genhtml_legend=1 00:22:40.476 --rc geninfo_all_blocks=1 00:22:40.476 --rc geninfo_unexecuted_blocks=1 00:22:40.476 00:22:40.476 ' 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:40.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.476 --rc genhtml_branch_coverage=1 00:22:40.476 --rc genhtml_function_coverage=1 00:22:40.476 --rc genhtml_legend=1 00:22:40.476 --rc geninfo_all_blocks=1 00:22:40.476 --rc geninfo_unexecuted_blocks=1 00:22:40.476 00:22:40.476 ' 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:40.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.476 --rc genhtml_branch_coverage=1 00:22:40.476 --rc 
genhtml_function_coverage=1 00:22:40.476 --rc genhtml_legend=1 00:22:40.476 --rc geninfo_all_blocks=1 00:22:40.476 --rc geninfo_unexecuted_blocks=1 00:22:40.476 00:22:40.476 ' 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.476 18:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:40.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
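The `[: : integer expression expected` message above comes from `build_nvmf_app_args` testing an empty string with a numeric `-eq` comparison (`'[' '' -eq 1 ']'`). A minimal sketch of the failure mode and the usual guard; the `check_flag` function name is illustrative, not from the SPDK scripts:

```shell
#!/usr/bin/env bash
# [ '' -eq 1 ] is malformed: -eq requires integers on both sides,
# so bash prints "integer expression expected" and returns 2.
# Defaulting the expansion to 0 keeps the test well-formed even
# when the variable is unset or empty.
check_flag() {
    local flag="$1"
    if [ "${flag:-0}" -eq 1 ]; then
        echo "enabled"
    else
        echo "disabled"
    fi
}

check_flag ""   # empty input no longer trips the error
check_flag 1
```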
00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:40.476 18:43:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:22:43.006 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:43.006 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.006 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:43.007 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.007 
18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:43.007 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
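The interface and address variables set by `nvmf_tcp_init` above drive the namespace wiring that follows: the target NIC (`cvl_0_0`) is moved into its own network namespace with 10.0.0.2, while the initiator NIC (`cvl_0_1`) stays in the root namespace with 10.0.0.1. A dry-run sketch of that sequence (`run` only echoes, so it needs no privileges or real hardware):

```shell
#!/usr/bin/env bash
# Dry-run of the netns wiring performed by nvmf_tcp_init.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 INIT_IF=cvl_0_1
TGT_IP=10.0.0.2 INIT_IP=10.0.0.1

run() { echo "+ $*"; }   # echo instead of executing

setup_netns() {
    run ip netns add "$NS"
    run ip link set "$TGT_IF" netns "$NS"
    run ip addr add "$INIT_IP/24" dev "$INIT_IF"
    run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
    run ip link set "$INIT_IF" up
    run ip netns exec "$NS" ip link set "$TGT_IF" up
    # open the NVMe/TCP port toward the initiator interface
    run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
}

setup_netns
```

The two `ping -c 1` probes in the log then verify connectivity in both directions across the namespace boundary.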
00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:43.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:22:43.007 00:22:43.007 --- 10.0.0.2 ping statistics --- 00:22:43.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.007 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:22:43.007 00:22:43.007 --- 10.0.0.1 ping statistics --- 00:22:43.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.007 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=759698 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 759698 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 759698 ']' 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.007 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.007 [2024-11-17 18:43:29.266412] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:22:43.007 [2024-11-17 18:43:29.266489] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:43.007 [2024-11-17 18:43:29.357594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.007 [2024-11-17 18:43:29.426572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.007 [2024-11-17 18:43:29.426639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.007 [2024-11-17 18:43:29.426689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.007 [2024-11-17 18:43:29.426728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.007 [2024-11-17 18:43:29.426747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
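The target command line logged above can be read as a few distinct pieces: the netns wrapper, the binary, and the app flags. `-m 0x78` is a core mask (binary 0111 1000, i.e. cores 3-6, matching the four reactor startup notices), and `--no-huge -s 1024` runs without hugepages in 1 GiB of memory, which is the point of this test. A sketch of how the pieces compose (assembly shown for illustration; the script builds `NVMF_APP` incrementally):

```shell
#!/usr/bin/env bash
# Compose the nvmf_tgt invocation from the log:
# run inside the target netns, hugepages disabled, cores 3-6.
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
APP_ARGS=(-i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78)

cmdline() { echo "${NS_CMD[@]}" "$NVMF_TGT" "${APP_ARGS[@]}"; }

cmdline
```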
00:22:43.007 [2024-11-17 18:43:29.428231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:43.007 [2024-11-17 18:43:29.428299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:43.007 [2024-11-17 18:43:29.428368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:43.007 [2024-11-17 18:43:29.428378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.266 [2024-11-17 18:43:29.660006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:43.266 18:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.266 Malloc0 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:43.266 [2024-11-17 18:43:29.697987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.266 18:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:43.266 { 00:22:43.266 "params": { 00:22:43.266 "name": "Nvme$subsystem", 00:22:43.266 "trtype": "$TEST_TRANSPORT", 00:22:43.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.266 "adrfam": "ipv4", 00:22:43.266 "trsvcid": "$NVMF_PORT", 00:22:43.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.266 "hdgst": ${hdgst:-false}, 00:22:43.266 "ddgst": ${ddgst:-false} 00:22:43.266 }, 00:22:43.266 "method": "bdev_nvme_attach_controller" 00:22:43.266 } 00:22:43.266 EOF 00:22:43.266 )") 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:43.266 18:43:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:43.266 "params": { 00:22:43.266 "name": "Nvme1", 00:22:43.266 "trtype": "tcp", 00:22:43.266 "traddr": "10.0.0.2", 00:22:43.266 "adrfam": "ipv4", 00:22:43.266 "trsvcid": "4420", 00:22:43.266 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.266 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.266 "hdgst": false, 00:22:43.266 "ddgst": false 00:22:43.266 }, 00:22:43.266 "method": "bdev_nvme_attach_controller" 00:22:43.266 }' 00:22:43.266 [2024-11-17 18:43:29.746881] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:22:43.266 [2024-11-17 18:43:29.746963] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid759722 ] 00:22:43.266 [2024-11-17 18:43:29.822869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:43.525 [2024-11-17 18:43:29.874285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.525 [2024-11-17 18:43:29.874336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.525 [2024-11-17 18:43:29.874339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.783 I/O targets: 00:22:43.783 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:43.783 00:22:43.783 00:22:43.783 CUnit - A unit testing framework for C - Version 2.1-3 00:22:43.783 http://cunit.sourceforge.net/ 00:22:43.783 00:22:43.783 00:22:43.783 Suite: bdevio tests on: Nvme1n1 00:22:43.783 Test: blockdev write read block ...passed 00:22:43.783 Test: blockdev write zeroes read block ...passed 00:22:43.783 Test: blockdev write zeroes read no split ...passed 00:22:43.783 Test: blockdev write zeroes 
read split ...passed 00:22:43.783 Test: blockdev write zeroes read split partial ...passed 00:22:43.783 Test: blockdev reset ...[2024-11-17 18:43:30.226524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:43.783 [2024-11-17 18:43:30.226638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6d4b0 (9): Bad file descriptor 00:22:43.783 [2024-11-17 18:43:30.282356] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:43.783 passed 00:22:43.783 Test: blockdev write read 8 blocks ...passed 00:22:43.783 Test: blockdev write read size > 128k ...passed 00:22:43.783 Test: blockdev write read invalid size ...passed 00:22:43.783 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:43.783 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:43.783 Test: blockdev write read max offset ...passed 00:22:44.041 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:44.041 Test: blockdev writev readv 8 blocks ...passed 00:22:44.041 Test: blockdev writev readv 30 x 1block ...passed 00:22:44.041 Test: blockdev writev readv block ...passed 00:22:44.041 Test: blockdev writev readv size > 128k ...passed 00:22:44.041 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:44.041 Test: blockdev comparev and writev ...[2024-11-17 18:43:30.451714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.041 [2024-11-17 18:43:30.451750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.041 [2024-11-17 18:43:30.451776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.041 [2024-11-17 
18:43:30.451795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:44.041 [2024-11-17 18:43:30.452131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.041 [2024-11-17 18:43:30.452157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:44.041 [2024-11-17 18:43:30.452179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.041 [2024-11-17 18:43:30.452196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:44.041 [2024-11-17 18:43:30.452519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.041 [2024-11-17 18:43:30.452542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:44.041 [2024-11-17 18:43:30.452564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.041 [2024-11-17 18:43:30.452581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:44.041 [2024-11-17 18:43:30.452921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.041 [2024-11-17 18:43:30.452946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:44.041 [2024-11-17 18:43:30.452968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.041 [2024-11-17 18:43:30.452984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:44.041 passed 00:22:44.041 Test: blockdev nvme passthru rw ...passed 00:22:44.041 Test: blockdev nvme passthru vendor specific ...[2024-11-17 18:43:30.534911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.041 [2024-11-17 18:43:30.534938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.041 [2024-11-17 18:43:30.535074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.041 [2024-11-17 18:43:30.535097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:44.041 [2024-11-17 18:43:30.535232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.041 [2024-11-17 18:43:30.535255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:44.041 [2024-11-17 18:43:30.535385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.041 [2024-11-17 18:43:30.535408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:44.041 passed 00:22:44.041 Test: blockdev nvme admin passthru ...passed 00:22:44.041 Test: blockdev copy ...passed 00:22:44.041 00:22:44.041 Run Summary: Type Total Ran Passed Failed Inactive 00:22:44.041 suites 1 1 n/a 0 0 00:22:44.041 tests 23 23 23 0 0 00:22:44.041 asserts 152 152 152 0 n/a 00:22:44.041 00:22:44.041 Elapsed time = 0.969 seconds 
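The `(xx/yy)` pairs that `spdk_nvme_print_completion` prints above are the NVMe status code type and status code in hex: the comparev-and-writev test expects every COMPARE to complete with Compare Failure (SCT 02h, SC 85h) and the paired WRITE to then be aborted as a failed fused command (SCT 00h, SC 09h). A minimal decoder for just the status pairs seen in this log (an illustrative sketch, not SPDK code; the function name is made up):

```shell
#!/usr/bin/env bash
# Map the "(SCT/SC)" hex pairs printed by spdk_nvme_print_completion
# to the names shown alongside them in the log above.
decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct/$sc" in
        00/01) echo "INVALID OPCODE" ;;          # generic command status 01h
        00/09) echo "ABORTED - FAILED FUSED" ;;  # generic command status 09h
        02/85) echo "COMPARE FAILURE" ;;         # media error status 85h
        *)     echo "UNKNOWN ($sct/$sc)" ;;
    esac
}

decode_nvme_status 02 85   # -> COMPARE FAILURE
```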
00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:44.608 rmmod nvme_tcp 00:22:44.608 rmmod nvme_fabrics 00:22:44.608 rmmod nvme_keyring 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 759698 ']' 00:22:44.608 18:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 759698 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 759698 ']' 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 759698 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.608 18:43:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 759698 00:22:44.608 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:44.608 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:44.608 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 759698' 00:22:44.608 killing process with pid 759698 00:22:44.608 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 759698 00:22:44.608 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 759698 00:22:44.867 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:44.867 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:44.867 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:44.867 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:44.867 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:44.867 18:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:44.867 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:44.867 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:44.867 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:44.867 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.867 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.867 18:43:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:47.409 00:22:47.409 real 0m6.654s 00:22:47.409 user 0m10.381s 00:22:47.409 sys 0m2.710s 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:47.409 ************************************ 00:22:47.409 END TEST nvmf_bdevio_no_huge 00:22:47.409 ************************************ 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:47.409 
************************************ 00:22:47.409 START TEST nvmf_tls 00:22:47.409 ************************************ 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:47.409 * Looking for test storage... 00:22:47.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:47.409 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:47.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.410 --rc genhtml_branch_coverage=1 00:22:47.410 --rc genhtml_function_coverage=1 00:22:47.410 --rc genhtml_legend=1 00:22:47.410 --rc geninfo_all_blocks=1 00:22:47.410 --rc geninfo_unexecuted_blocks=1 00:22:47.410 00:22:47.410 ' 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:47.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.410 --rc genhtml_branch_coverage=1 00:22:47.410 --rc genhtml_function_coverage=1 00:22:47.410 --rc genhtml_legend=1 00:22:47.410 --rc geninfo_all_blocks=1 00:22:47.410 --rc geninfo_unexecuted_blocks=1 00:22:47.410 00:22:47.410 ' 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:47.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.410 --rc genhtml_branch_coverage=1 00:22:47.410 --rc genhtml_function_coverage=1 00:22:47.410 --rc genhtml_legend=1 00:22:47.410 --rc geninfo_all_blocks=1 00:22:47.410 --rc geninfo_unexecuted_blocks=1 00:22:47.410 00:22:47.410 ' 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:47.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.410 --rc genhtml_branch_coverage=1 00:22:47.410 --rc genhtml_function_coverage=1 00:22:47.410 --rc genhtml_legend=1 00:22:47.410 --rc geninfo_all_blocks=1 00:22:47.410 --rc geninfo_unexecuted_blocks=1 00:22:47.410 00:22:47.410 ' 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.410 
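The `lt 1.15 2` / `cmp_versions` trace above checks the installed lcov version component-wise: both version strings are split into arrays, then each numeric field is compared left to right until one side wins. A standalone sketch of that logic (a simplified reconstruction, not the shipped `scripts/common.sh`; the real helper also splits on `-` and `:`):

```shell
#!/usr/bin/env bash
# Compare two dotted version strings component by component, padding
# the shorter one with zeros, as in the cmp_versions trace above.
cmp_versions() {
    local op=$2
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$3"
    local v a b len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then [[ $op == '>' ]] && return 0 || return 1; fi
        if (( a < b )); then [[ $op == '<' ]] && return 0 || return 1; fi
    done
    [[ $op == '==' ]] && return 0 || return 1   # all components equal
}

lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 succeeds, as in the trace
```

Note that the comparison is numeric per field, so `1.2 < 1.10` holds even though the strings would sort the other way lexically.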
18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:47.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:47.410 18:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.315 18:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:49.315 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:49.315 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.315 18:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:49.315 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:49.315 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:49.315 18:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.315 
18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.315 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:49.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:22:49.574 00:22:49.574 --- 10.0.0.2 ping statistics --- 00:22:49.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.574 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:22:49.574 00:22:49.574 --- 10.0.0.1 ping statistics --- 00:22:49.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.574 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=761922 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 761922 00:22:49.574 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 761922 ']' 00:22:49.575 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.575 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.575 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.575 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.575 18:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.575 [2024-11-17 18:43:36.002537] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:22:49.575 [2024-11-17 18:43:36.002628] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.575 [2024-11-17 18:43:36.077214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.575 [2024-11-17 18:43:36.121386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.575 [2024-11-17 18:43:36.121460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:49.575 [2024-11-17 18:43:36.121483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.575 [2024-11-17 18:43:36.121494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.575 [2024-11-17 18:43:36.121503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.575 [2024-11-17 18:43:36.122127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.833 18:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.833 18:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:49.833 18:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.833 18:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.833 18:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.833 18:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.833 18:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:49.833 18:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:50.091 true 00:22:50.091 18:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:50.091 18:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:50.348 18:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:50.348 18:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:50.348 
18:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:50.607 18:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:50.607 18:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:50.865 18:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:50.865 18:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:50.865 18:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:51.123 18:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.123 18:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:51.380 18:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:51.380 18:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:51.380 18:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.380 18:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:51.638 18:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:51.638 18:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:51.638 18:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:51.896 18:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.896 18:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:52.154 18:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:52.154 18:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:52.154 18:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:52.412 18:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:52.412 18:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:52.978 18:43:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.LWo2UrEfvd 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.APl1JKDoJd 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.LWo2UrEfvd 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.APl1JKDoJd 00:22:52.978 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:53.235 18:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:53.801 18:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.LWo2UrEfvd 00:22:53.801 18:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LWo2UrEfvd 00:22:53.801 18:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:54.059 [2024-11-17 18:43:40.379639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.059 18:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:54.316 18:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:54.575 [2024-11-17 18:43:40.965125] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:54.575 [2024-11-17 18:43:40.965341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.575 18:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:54.833 malloc0 00:22:54.833 18:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:55.090 18:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LWo2UrEfvd 00:22:55.348 18:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:55.605 18:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.LWo2UrEfvd 00:23:07.803 Initializing NVMe Controllers 00:23:07.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:07.803 Initialization complete. Launching workers. 
00:23:07.803 ======================================================== 00:23:07.803 Latency(us) 00:23:07.803 Device Information : IOPS MiB/s Average min max 00:23:07.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8587.47 33.54 7454.79 957.65 8620.44 00:23:07.803 ======================================================== 00:23:07.803 Total : 8587.47 33.54 7454.79 957.65 8620.44 00:23:07.803 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LWo2UrEfvd 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LWo2UrEfvd 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=763818 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 763818 /var/tmp/bdevperf.sock 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 763818 ']' 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.803 [2024-11-17 18:43:52.233064] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:07.803 [2024-11-17 18:43:52.233161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763818 ] 00:23:07.803 [2024-11-17 18:43:52.306170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.803 [2024-11-17 18:43:52.356642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LWo2UrEfvd 00:23:07.803 18:43:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:23:07.803 [2024-11-17 18:43:53.008442] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.803 TLSTESTn1 00:23:07.803 18:43:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:07.803 Running I/O for 10 seconds... 00:23:08.736 3531.00 IOPS, 13.79 MiB/s [2024-11-17T17:43:56.282Z] 3581.50 IOPS, 13.99 MiB/s [2024-11-17T17:43:57.237Z] 3591.33 IOPS, 14.03 MiB/s [2024-11-17T17:43:58.609Z] 3596.50 IOPS, 14.05 MiB/s [2024-11-17T17:43:59.543Z] 3619.40 IOPS, 14.14 MiB/s [2024-11-17T17:44:00.475Z] 3630.00 IOPS, 14.18 MiB/s [2024-11-17T17:44:01.410Z] 3632.14 IOPS, 14.19 MiB/s [2024-11-17T17:44:02.343Z] 3642.50 IOPS, 14.23 MiB/s [2024-11-17T17:44:03.275Z] 3646.78 IOPS, 14.25 MiB/s [2024-11-17T17:44:03.275Z] 3646.40 IOPS, 14.24 MiB/s 00:23:16.699 Latency(us) 00:23:16.699 [2024-11-17T17:44:03.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.699 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:16.699 Verification LBA range: start 0x0 length 0x2000 00:23:16.699 TLSTESTn1 : 10.02 3652.27 14.27 0.00 0.00 34989.57 6043.88 38641.97 00:23:16.699 [2024-11-17T17:44:03.275Z] =================================================================================================================== 00:23:16.699 [2024-11-17T17:44:03.275Z] Total : 3652.27 14.27 0.00 0.00 34989.57 6043.88 38641.97 00:23:16.699 { 00:23:16.699 "results": [ 00:23:16.699 { 00:23:16.699 "job": "TLSTESTn1", 00:23:16.699 "core_mask": "0x4", 00:23:16.699 "workload": "verify", 00:23:16.699 "status": "finished", 00:23:16.699 "verify_range": { 00:23:16.699 "start": 0, 00:23:16.699 "length": 8192 00:23:16.699 }, 00:23:16.699 "queue_depth": 128, 00:23:16.699 "io_size": 4096, 00:23:16.699 "runtime": 10.01869, 00:23:16.699 "iops": 
3652.273900080749, 00:23:16.699 "mibps": 14.266694922190426, 00:23:16.699 "io_failed": 0, 00:23:16.699 "io_timeout": 0, 00:23:16.699 "avg_latency_us": 34989.56941842611, 00:23:16.699 "min_latency_us": 6043.875555555555, 00:23:16.699 "max_latency_us": 38641.96740740741 00:23:16.699 } 00:23:16.699 ], 00:23:16.699 "core_count": 1 00:23:16.699 } 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 763818 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 763818 ']' 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 763818 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 763818 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 763818' 00:23:16.958 killing process with pid 763818 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 763818 00:23:16.958 Received shutdown signal, test time was about 10.000000 seconds 00:23:16.958 00:23:16.958 Latency(us) 00:23:16.958 [2024-11-17T17:44:03.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.958 [2024-11-17T17:44:03.534Z] 
=================================================================================================================== 00:23:16.958 [2024-11-17T17:44:03.534Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 763818 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.APl1JKDoJd 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.APl1JKDoJd 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.APl1JKDoJd 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.APl1JKDoJd 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=765137 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 765137 /var/tmp/bdevperf.sock 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 765137 ']' 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.958 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.216 [2024-11-17 18:44:03.575040] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:17.216 [2024-11-17 18:44:03.575138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid765137 ] 00:23:17.216 [2024-11-17 18:44:03.642652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.216 [2024-11-17 18:44:03.686516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.474 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.474 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:17.474 18:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.APl1JKDoJd 00:23:17.732 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:17.990 [2024-11-17 18:44:04.341132] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.990 [2024-11-17 18:44:04.347558] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:17.990 [2024-11-17 18:44:04.348327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f56e0 (107): Transport endpoint is not connected 00:23:17.990 [2024-11-17 18:44:04.349304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f56e0 (9): Bad file descriptor 00:23:17.990 
[2024-11-17 18:44:04.350305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:17.990 [2024-11-17 18:44:04.350327] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:17.990 [2024-11-17 18:44:04.350340] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:17.990 [2024-11-17 18:44:04.350359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:17.990 request: 00:23:17.990 { 00:23:17.990 "name": "TLSTEST", 00:23:17.990 "trtype": "tcp", 00:23:17.990 "traddr": "10.0.0.2", 00:23:17.990 "adrfam": "ipv4", 00:23:17.990 "trsvcid": "4420", 00:23:17.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.990 "prchk_reftag": false, 00:23:17.990 "prchk_guard": false, 00:23:17.990 "hdgst": false, 00:23:17.990 "ddgst": false, 00:23:17.990 "psk": "key0", 00:23:17.990 "allow_unrecognized_csi": false, 00:23:17.990 "method": "bdev_nvme_attach_controller", 00:23:17.990 "req_id": 1 00:23:17.990 } 00:23:17.990 Got JSON-RPC error response 00:23:17.990 response: 00:23:17.990 { 00:23:17.990 "code": -5, 00:23:17.990 "message": "Input/output error" 00:23:17.990 } 00:23:17.990 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 765137 00:23:17.990 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 765137 ']' 00:23:17.990 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 765137 00:23:17.990 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.990 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.990 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 765137 00:23:17.990 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:17.990 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:17.990 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 765137' 00:23:17.990 killing process with pid 765137 00:23:17.990 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 765137 00:23:17.990 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.990 00:23:17.990 Latency(us) 00:23:17.990 [2024-11-17T17:44:04.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.990 [2024-11-17T17:44:04.566Z] =================================================================================================================== 00:23:17.990 [2024-11-17T17:44:04.566Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:17.990 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 765137 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LWo2UrEfvd 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
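An aside on the failures recorded above: each `bdev_nvme_attach_controller` attempt against `/var/tmp/bdevperf.sock` comes back as JSON-RPC error `-5` (`Input/output error`) because the target side cannot find a PSK for the host's TLS identity. As an illustrative sketch only (not part of the test run — the helper names are my own, and this bypasses `scripts/rpc.py`), a request of the same shape can be built and shipped over the application's UNIX socket like this:

```python
import json
import socket


def build_rpc_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request of the shape SPDK's rpc.py sends."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return req


def send_rpc(sock_path, request):
    """Send one request over an SPDK app's UNIX socket, decode the reply.

    The reply is complete once the accumulated bytes parse as JSON.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf)
            except json.JSONDecodeError:
                continue  # partial reply, keep reading
    return json.loads(buf)


# Mirrors the failing call in the log (needs a live bdevperf to actually run):
req = build_rpc_request("bdev_nvme_attach_controller", {
    "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0",
})
```

Against this test's targets such a request would return the same `{"code": -5, "message": "Input/output error"}` body captured in the log, since the PSK identity lookup fails on the target.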
00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LWo2UrEfvd 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LWo2UrEfvd 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LWo2UrEfvd 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=765278 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 765278 
/var/tmp/bdevperf.sock 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 765278 ']' 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.248 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.248 [2024-11-17 18:44:04.656550] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:18.248 [2024-11-17 18:44:04.656652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid765278 ] 00:23:18.248 [2024-11-17 18:44:04.727254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.248 [2024-11-17 18:44:04.773403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.506 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.506 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:18.506 18:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LWo2UrEfvd 00:23:18.764 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:19.022 [2024-11-17 18:44:05.454538] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.022 [2024-11-17 18:44:05.460289] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:19.022 [2024-11-17 18:44:05.460338] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:19.022 [2024-11-17 18:44:05.460406] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:19.022 [2024-11-17 18:44:05.460657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b36e0 (107): Transport endpoint is not connected 00:23:19.022 [2024-11-17 18:44:05.461645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b36e0 (9): Bad file descriptor 00:23:19.022 [2024-11-17 18:44:05.462644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:19.022 [2024-11-17 18:44:05.462690] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:19.022 [2024-11-17 18:44:05.462706] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:19.022 [2024-11-17 18:44:05.462739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:19.022 request: 00:23:19.022 { 00:23:19.022 "name": "TLSTEST", 00:23:19.022 "trtype": "tcp", 00:23:19.022 "traddr": "10.0.0.2", 00:23:19.022 "adrfam": "ipv4", 00:23:19.022 "trsvcid": "4420", 00:23:19.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.022 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:19.022 "prchk_reftag": false, 00:23:19.022 "prchk_guard": false, 00:23:19.022 "hdgst": false, 00:23:19.022 "ddgst": false, 00:23:19.022 "psk": "key0", 00:23:19.022 "allow_unrecognized_csi": false, 00:23:19.022 "method": "bdev_nvme_attach_controller", 00:23:19.022 "req_id": 1 00:23:19.022 } 00:23:19.022 Got JSON-RPC error response 00:23:19.022 response: 00:23:19.022 { 00:23:19.022 "code": -5, 00:23:19.022 "message": "Input/output error" 00:23:19.022 } 00:23:19.022 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 765278 00:23:19.022 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 765278 ']' 00:23:19.022 18:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 765278 00:23:19.022 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:19.022 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.022 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 765278 00:23:19.022 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:19.022 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:19.022 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 765278' 00:23:19.022 killing process with pid 765278 00:23:19.022 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 765278 00:23:19.022 Received shutdown signal, test time was about 10.000000 seconds 00:23:19.022 00:23:19.022 Latency(us) 00:23:19.022 [2024-11-17T17:44:05.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.022 [2024-11-17T17:44:05.598Z] =================================================================================================================== 00:23:19.022 [2024-11-17T17:44:05.598Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:19.022 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 765278 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:19.280 18:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LWo2UrEfvd 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LWo2UrEfvd 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LWo2UrEfvd 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LWo2UrEfvd 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=765424 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.280 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 765424 /var/tmp/bdevperf.sock 00:23:19.281 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 765424 ']' 00:23:19.281 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.281 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.281 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.281 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.281 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.281 [2024-11-17 18:44:05.756302] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:19.281 [2024-11-17 18:44:05.756400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid765424 ] 00:23:19.281 [2024-11-17 18:44:05.823754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.539 [2024-11-17 18:44:05.866970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.539 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.539 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:19.539 18:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LWo2UrEfvd 00:23:19.797 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.055 [2024-11-17 18:44:06.509382] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.055 [2024-11-17 18:44:06.514945] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:20.055 [2024-11-17 18:44:06.514986] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:20.055 [2024-11-17 18:44:06.515025] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:20.055 [2024-11-17 18:44:06.515591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b6e0 (107): Transport endpoint is not connected 00:23:20.055 [2024-11-17 18:44:06.516580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b6e0 (9): Bad file descriptor 00:23:20.055 [2024-11-17 18:44:06.517579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:20.055 [2024-11-17 18:44:06.517599] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:20.055 [2024-11-17 18:44:06.517619] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:20.055 [2024-11-17 18:44:06.517637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:20.055 request: 00:23:20.055 { 00:23:20.055 "name": "TLSTEST", 00:23:20.055 "trtype": "tcp", 00:23:20.055 "traddr": "10.0.0.2", 00:23:20.055 "adrfam": "ipv4", 00:23:20.055 "trsvcid": "4420", 00:23:20.055 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:20.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.055 "prchk_reftag": false, 00:23:20.055 "prchk_guard": false, 00:23:20.055 "hdgst": false, 00:23:20.055 "ddgst": false, 00:23:20.055 "psk": "key0", 00:23:20.055 "allow_unrecognized_csi": false, 00:23:20.055 "method": "bdev_nvme_attach_controller", 00:23:20.055 "req_id": 1 00:23:20.055 } 00:23:20.055 Got JSON-RPC error response 00:23:20.055 response: 00:23:20.055 { 00:23:20.055 "code": -5, 00:23:20.055 "message": "Input/output error" 00:23:20.055 } 00:23:20.055 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 765424 00:23:20.055 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 765424 ']' 00:23:20.055 18:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 765424 00:23:20.055 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:20.055 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.055 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 765424 00:23:20.055 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:20.055 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:20.055 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 765424' 00:23:20.055 killing process with pid 765424 00:23:20.055 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 765424 00:23:20.055 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.055 00:23:20.055 Latency(us) 00:23:20.055 [2024-11-17T17:44:06.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.055 [2024-11-17T17:44:06.631Z] =================================================================================================================== 00:23:20.055 [2024-11-17T17:44:06.631Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.055 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 765424 00:23:20.313 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:20.313 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:20.313 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:20.313 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:20.313 18:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:20.313 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.313 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:20.313 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.313 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:20.313 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.313 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:20.313 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.313 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:20.314 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.314 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.314 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:20.314 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:20.314 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.314 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=765563 00:23:20.314 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.314 18:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.314 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 765563 /var/tmp/bdevperf.sock 00:23:20.314 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 765563 ']' 00:23:20.314 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.314 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.314 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.314 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.314 18:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.314 [2024-11-17 18:44:06.788039] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:20.314 [2024-11-17 18:44:06.788110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid765563 ] 00:23:20.314 [2024-11-17 18:44:06.854345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.572 [2024-11-17 18:44:06.900474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.572 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.572 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:20.572 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:20.830 [2024-11-17 18:44:07.250043] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:20.830 [2024-11-17 18:44:07.250099] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:20.830 request: 00:23:20.830 { 00:23:20.830 "name": "key0", 00:23:20.830 "path": "", 00:23:20.830 "method": "keyring_file_add_key", 00:23:20.830 "req_id": 1 00:23:20.830 } 00:23:20.830 Got JSON-RPC error response 00:23:20.830 response: 00:23:20.830 { 00:23:20.830 "code": -1, 00:23:20.830 "message": "Operation not permitted" 00:23:20.830 } 00:23:20.830 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.088 [2024-11-17 18:44:07.514853] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:21.088 [2024-11-17 18:44:07.514898] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:21.088 request: 00:23:21.088 { 00:23:21.088 "name": "TLSTEST", 00:23:21.088 "trtype": "tcp", 00:23:21.088 "traddr": "10.0.0.2", 00:23:21.088 "adrfam": "ipv4", 00:23:21.088 "trsvcid": "4420", 00:23:21.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.088 "prchk_reftag": false, 00:23:21.088 "prchk_guard": false, 00:23:21.088 "hdgst": false, 00:23:21.088 "ddgst": false, 00:23:21.088 "psk": "key0", 00:23:21.088 "allow_unrecognized_csi": false, 00:23:21.088 "method": "bdev_nvme_attach_controller", 00:23:21.088 "req_id": 1 00:23:21.088 } 00:23:21.088 Got JSON-RPC error response 00:23:21.088 response: 00:23:21.088 { 00:23:21.088 "code": -126, 00:23:21.088 "message": "Required key not available" 00:23:21.088 } 00:23:21.088 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 765563 00:23:21.088 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 765563 ']' 00:23:21.088 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 765563 00:23:21.088 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.088 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.088 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 765563 00:23:21.088 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:21.088 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:21.088 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 765563' 00:23:21.088 killing process with pid 765563 00:23:21.088 
18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 765563 00:23:21.088 Received shutdown signal, test time was about 10.000000 seconds 00:23:21.088 00:23:21.088 Latency(us) 00:23:21.088 [2024-11-17T17:44:07.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.088 [2024-11-17T17:44:07.664Z] =================================================================================================================== 00:23:21.088 [2024-11-17T17:44:07.664Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:21.088 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 765563 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 761922 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 761922 ']' 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 761922 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 761922 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 761922' 00:23:21.346 killing process with pid 761922 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 761922 00:23:21.346 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 761922 00:23:21.604 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.604 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:21.604 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:21.604 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:21.604 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:21.604 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:21.604 18:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:21.604 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:21.604 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:21.604 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.U3GzqFrrFs 00:23:21.604 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:21.604 18:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.U3GzqFrrFs 00:23:21.604 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:21.605 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.605 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.605 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.605 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=765717 00:23:21.605 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:21.605 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 765717 00:23:21.605 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 765717 ']' 00:23:21.605 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.605 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.605 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.605 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.605 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.605 [2024-11-17 18:44:08.084963] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:21.605 [2024-11-17 18:44:08.085056] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.605 [2024-11-17 18:44:08.157517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.863 [2024-11-17 18:44:08.202742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.863 [2024-11-17 18:44:08.202807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.863 [2024-11-17 18:44:08.202838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.863 [2024-11-17 18:44:08.202851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.863 [2024-11-17 18:44:08.202860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:21.863 [2024-11-17 18:44:08.203403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.863 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.863 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:21.863 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.863 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.863 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.863 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.863 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.U3GzqFrrFs 00:23:21.863 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.U3GzqFrrFs 00:23:21.863 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:22.121 [2024-11-17 18:44:08.589272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.121 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:22.379 18:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:22.637 [2024-11-17 18:44:09.130768] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.637 [2024-11-17 18:44:09.131046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:22.637 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:22.894 malloc0 00:23:22.894 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:23.151 18:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.U3GzqFrrFs 00:23:23.717 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.U3GzqFrrFs 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.U3GzqFrrFs 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=766002 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:23.976 18:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 766002 /var/tmp/bdevperf.sock 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 766002 ']' 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.976 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.976 [2024-11-17 18:44:10.352212] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:23.976 [2024-11-17 18:44:10.352280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766002 ] 00:23:23.976 [2024-11-17 18:44:10.419987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.976 [2024-11-17 18:44:10.465771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.233 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.233 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:24.233 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.U3GzqFrrFs 00:23:24.492 18:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:24.749 [2024-11-17 18:44:11.199499] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.749 TLSTESTn1 00:23:24.749 18:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:25.007 Running I/O for 10 seconds... 
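
bdevperf reports throughput both as IOPS and as MiB/s. For the fixed 4096-byte I/O size configured here (`-o 4096`), the two figures in its summary are related by a single multiplication; the values used below are the totals this run ends up reporting in its JSON summary:

```python
# Relate bdevperf's IOPS and MiB/s figures for a fixed I/O size.
IO_SIZE = 4096                      # bytes, from "-o 4096" on the command line
iops = 3318.4356661463603           # "iops" from the JSON results block
mibps = iops * IO_SIZE / (1024 * 1024)
print(f"{mibps:.8f} MiB/s")
```

This reproduces the reported `"mibps"` value, confirming the summary's MiB/s column is derived directly from IOPS times I/O size.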
00:23:26.871 3111.00 IOPS, 12.15 MiB/s [2024-11-17T17:44:14.822Z] 3236.50 IOPS, 12.64 MiB/s [2024-11-17T17:44:15.755Z] 3260.33 IOPS, 12.74 MiB/s [2024-11-17T17:44:16.689Z] 3276.50 IOPS, 12.80 MiB/s [2024-11-17T17:44:17.624Z] 3287.60 IOPS, 12.84 MiB/s [2024-11-17T17:44:18.558Z] 3297.33 IOPS, 12.88 MiB/s [2024-11-17T17:44:19.490Z] 3299.86 IOPS, 12.89 MiB/s [2024-11-17T17:44:20.423Z] 3311.12 IOPS, 12.93 MiB/s [2024-11-17T17:44:21.795Z] 3311.56 IOPS, 12.94 MiB/s [2024-11-17T17:44:21.795Z] 3314.20 IOPS, 12.95 MiB/s 00:23:35.219 Latency(us) 00:23:35.219 [2024-11-17T17:44:21.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.219 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:35.219 Verification LBA range: start 0x0 length 0x2000 00:23:35.219 TLSTESTn1 : 10.03 3318.44 12.96 0.00 0.00 38498.65 9126.49 48156.82 00:23:35.219 [2024-11-17T17:44:21.795Z] =================================================================================================================== 00:23:35.219 [2024-11-17T17:44:21.795Z] Total : 3318.44 12.96 0.00 0.00 38498.65 9126.49 48156.82 00:23:35.219 { 00:23:35.219 "results": [ 00:23:35.219 { 00:23:35.219 "job": "TLSTESTn1", 00:23:35.219 "core_mask": "0x4", 00:23:35.219 "workload": "verify", 00:23:35.219 "status": "finished", 00:23:35.219 "verify_range": { 00:23:35.219 "start": 0, 00:23:35.219 "length": 8192 00:23:35.219 }, 00:23:35.219 "queue_depth": 128, 00:23:35.219 "io_size": 4096, 00:23:35.219 "runtime": 10.025507, 00:23:35.219 "iops": 3318.4356661463603, 00:23:35.219 "mibps": 12.96263932088422, 00:23:35.219 "io_failed": 0, 00:23:35.219 "io_timeout": 0, 00:23:35.219 "avg_latency_us": 38498.65339411731, 00:23:35.219 "min_latency_us": 9126.494814814814, 00:23:35.219 "max_latency_us": 48156.8237037037 00:23:35.219 } 00:23:35.219 ], 00:23:35.219 "core_count": 1 00:23:35.219 } 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 766002 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 766002 ']' 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 766002 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 766002 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 766002' 00:23:35.219 killing process with pid 766002 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 766002 00:23:35.219 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.219 00:23:35.219 Latency(us) 00:23:35.219 [2024-11-17T17:44:21.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.219 [2024-11-17T17:44:21.795Z] =================================================================================================================== 00:23:35.219 [2024-11-17T17:44:21.795Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 766002 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.U3GzqFrrFs 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # 
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.U3GzqFrrFs 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.U3GzqFrrFs 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.U3GzqFrrFs 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.U3GzqFrrFs 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=767317 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:35.219 18:44:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 767317 /var/tmp/bdevperf.sock 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 767317 ']' 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.219 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.220 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.220 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.220 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.220 [2024-11-17 18:44:21.741855] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:23:35.220 [2024-11-17 18:44:21.741961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid767317 ] 00:23:35.477 [2024-11-17 18:44:21.809334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.477 [2024-11-17 18:44:21.852447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.477 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.477 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:35.477 18:44:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.U3GzqFrrFs 00:23:35.734 [2024-11-17 18:44:22.207038] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.U3GzqFrrFs': 0100666 00:23:35.734 [2024-11-17 18:44:22.207081] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:35.734 request: 00:23:35.734 { 00:23:35.734 "name": "key0", 00:23:35.734 "path": "/tmp/tmp.U3GzqFrrFs", 00:23:35.734 "method": "keyring_file_add_key", 00:23:35.734 "req_id": 1 00:23:35.734 } 00:23:35.734 Got JSON-RPC error response 00:23:35.734 response: 00:23:35.735 { 00:23:35.735 "code": -1, 00:23:35.735 "message": "Operation not permitted" 00:23:35.735 } 00:23:35.735 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.992 [2024-11-17 18:44:22.495932] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.992 [2024-11-17 18:44:22.496007] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:35.992 request: 00:23:35.992 { 00:23:35.992 "name": "TLSTEST", 00:23:35.992 "trtype": "tcp", 00:23:35.992 "traddr": "10.0.0.2", 00:23:35.992 "adrfam": "ipv4", 00:23:35.992 "trsvcid": "4420", 00:23:35.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.992 "prchk_reftag": false, 00:23:35.992 "prchk_guard": false, 00:23:35.992 "hdgst": false, 00:23:35.992 "ddgst": false, 00:23:35.992 "psk": "key0", 00:23:35.992 "allow_unrecognized_csi": false, 00:23:35.992 "method": "bdev_nvme_attach_controller", 00:23:35.992 "req_id": 1 00:23:35.992 } 00:23:35.992 Got JSON-RPC error response 00:23:35.992 response: 00:23:35.992 { 00:23:35.992 "code": -126, 00:23:35.992 "message": "Required key not available" 00:23:35.992 } 00:23:35.992 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 767317 00:23:35.992 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 767317 ']' 00:23:35.992 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 767317 00:23:35.992 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:35.992 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.992 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 767317 00:23:35.993 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:35.993 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:35.993 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 767317' 00:23:35.993 killing process with pid 767317 00:23:35.993 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 767317 00:23:35.993 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.993 00:23:35.993 Latency(us) 00:23:35.993 [2024-11-17T17:44:22.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.993 [2024-11-17T17:44:22.569Z] =================================================================================================================== 00:23:35.993 [2024-11-17T17:44:22.569Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:35.993 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 767317 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 765717 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 765717 ']' 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 765717 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 765717 00:23:36.250 18:44:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 765717' 00:23:36.250 killing process with pid 765717 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 765717 00:23:36.250 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 765717 00:23:36.508 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:36.508 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.508 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.508 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.508 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=767472 00:23:36.508 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:36.508 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 767472 00:23:36.508 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 767472 ']' 00:23:36.508 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.508 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.508 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:36.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.508 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.508 18:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.508 [2024-11-17 18:44:22.992596] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:36.508 [2024-11-17 18:44:22.992717] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.508 [2024-11-17 18:44:23.063957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.766 [2024-11-17 18:44:23.112394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.766 [2024-11-17 18:44:23.112453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.766 [2024-11-17 18:44:23.112481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.766 [2024-11-17 18:44:23.112492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.766 [2024-11-17 18:44:23.112502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
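
The `keyring_file_add_key` failure earlier in this run ("Invalid permissions for key file '/tmp/tmp.U3GzqFrrFs': 0100666") comes from a mode check on the key file: after `chmod 0666` the key is group- and world-accessible, and the keyring rejects it until `chmod 0600` restores owner-only access. A sketch of an equivalent check follows; the exact bits SPDK's `keyring_file_check_path` tests are an assumption here, modeled on the common "no group/other access" rule the log behavior suggests:

```python
import os
import stat
import tempfile

def key_file_permissions_ok(path: str) -> bool:
    """Reject key files that grant any access to group or other,
    mimicking the keyring_file_check_path behaviour seen in the log."""
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# Demonstrate both outcomes on a throwaway file (POSIX semantics assumed).
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o666)
world_accessible_ok = key_file_permissions_ok(path)   # expected: rejected
os.chmod(path, 0o600)
owner_only_ok = key_file_permissions_ok(path)         # expected: accepted
os.unlink(path)
print(world_accessible_ok, owner_only_ok)
```

This is why the negative test in the trace deliberately loosens the mode to 0666 before retrying the RPC: the key file itself is unchanged, only its permissions make it unusable.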
00:23:36.766 [2024-11-17 18:44:23.113128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.U3GzqFrrFs 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.U3GzqFrrFs 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.U3GzqFrrFs 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.U3GzqFrrFs 00:23:36.766 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:37.024 [2024-11-17 18:44:23.498726] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.024 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:37.282 18:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:37.540 [2024-11-17 18:44:24.052259] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.540 [2024-11-17 18:44:24.052492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.540 18:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:37.798 malloc0 00:23:37.798 18:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:38.056 18:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.U3GzqFrrFs 00:23:38.313 [2024-11-17 18:44:24.866085] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.U3GzqFrrFs': 0100666 00:23:38.313 [2024-11-17 18:44:24.866131] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:38.313 request: 00:23:38.313 { 00:23:38.313 "name": "key0", 00:23:38.313 "path": "/tmp/tmp.U3GzqFrrFs", 00:23:38.313 "method": "keyring_file_add_key", 00:23:38.313 "req_id": 1 
00:23:38.313 } 00:23:38.313 Got JSON-RPC error response 00:23:38.313 response: 00:23:38.313 { 00:23:38.313 "code": -1, 00:23:38.313 "message": "Operation not permitted" 00:23:38.313 } 00:23:38.313 18:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.881 [2024-11-17 18:44:25.154887] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:38.881 [2024-11-17 18:44:25.154964] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:38.881 request: 00:23:38.881 { 00:23:38.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.881 "host": "nqn.2016-06.io.spdk:host1", 00:23:38.881 "psk": "key0", 00:23:38.881 "method": "nvmf_subsystem_add_host", 00:23:38.881 "req_id": 1 00:23:38.881 } 00:23:38.881 Got JSON-RPC error response 00:23:38.881 response: 00:23:38.881 { 00:23:38.881 "code": -32603, 00:23:38.881 "message": "Internal error" 00:23:38.881 } 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 767472 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 767472 ']' 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 767472 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:38.881 18:44:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 767472 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 767472' 00:23:38.881 killing process with pid 767472 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 767472 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 767472 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.U3GzqFrrFs 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=767765 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 767765 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 767765 ']' 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.881 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.140 [2024-11-17 18:44:25.493782] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:39.140 [2024-11-17 18:44:25.493901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.140 [2024-11-17 18:44:25.567254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.140 [2024-11-17 18:44:25.606303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.140 [2024-11-17 18:44:25.606365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.140 [2024-11-17 18:44:25.606386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.140 [2024-11-17 18:44:25.606397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.140 [2024-11-17 18:44:25.606407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:39.140 [2024-11-17 18:44:25.606958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.140 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.140 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:39.140 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.140 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.140 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.398 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.398 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.U3GzqFrrFs 00:23:39.398 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.U3GzqFrrFs 00:23:39.398 18:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:39.656 [2024-11-17 18:44:26.001413] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.656 18:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:39.914 18:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:40.172 [2024-11-17 18:44:26.538904] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.172 [2024-11-17 18:44:26.539174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:40.172 18:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:40.431 malloc0 00:23:40.431 18:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:40.689 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.U3GzqFrrFs 00:23:40.948 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:41.206 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=768054 00:23:41.206 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:41.206 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:41.206 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 768054 /var/tmp/bdevperf.sock 00:23:41.206 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 768054 ']' 00:23:41.206 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.206 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.206 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:23:41.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.206 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.206 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.206 [2024-11-17 18:44:27.682804] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:41.206 [2024-11-17 18:44:27.682875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768054 ] 00:23:41.206 [2024-11-17 18:44:27.749017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.465 [2024-11-17 18:44:27.795020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.465 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.465 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:41.465 18:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.U3GzqFrrFs 00:23:41.723 18:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:41.981 [2024-11-17 18:44:28.429515] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.981 TLSTESTn1 00:23:41.981 18:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:42.547 18:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:42.547 "subsystems": [ 00:23:42.547 { 00:23:42.547 "subsystem": "keyring", 00:23:42.547 "config": [ 00:23:42.547 { 00:23:42.547 "method": "keyring_file_add_key", 00:23:42.547 "params": { 00:23:42.547 "name": "key0", 00:23:42.547 "path": "/tmp/tmp.U3GzqFrrFs" 00:23:42.547 } 00:23:42.547 } 00:23:42.547 ] 00:23:42.547 }, 00:23:42.547 { 00:23:42.547 "subsystem": "iobuf", 00:23:42.547 "config": [ 00:23:42.547 { 00:23:42.547 "method": "iobuf_set_options", 00:23:42.547 "params": { 00:23:42.547 "small_pool_count": 8192, 00:23:42.547 "large_pool_count": 1024, 00:23:42.547 "small_bufsize": 8192, 00:23:42.547 "large_bufsize": 135168, 00:23:42.547 "enable_numa": false 00:23:42.547 } 00:23:42.547 } 00:23:42.547 ] 00:23:42.547 }, 00:23:42.547 { 00:23:42.547 "subsystem": "sock", 00:23:42.547 "config": [ 00:23:42.547 { 00:23:42.547 "method": "sock_set_default_impl", 00:23:42.547 "params": { 00:23:42.547 "impl_name": "posix" 00:23:42.547 } 00:23:42.547 }, 00:23:42.547 { 00:23:42.547 "method": "sock_impl_set_options", 00:23:42.547 "params": { 00:23:42.547 "impl_name": "ssl", 00:23:42.547 "recv_buf_size": 4096, 00:23:42.548 "send_buf_size": 4096, 00:23:42.548 "enable_recv_pipe": true, 00:23:42.548 "enable_quickack": false, 00:23:42.548 "enable_placement_id": 0, 00:23:42.548 "enable_zerocopy_send_server": true, 00:23:42.548 "enable_zerocopy_send_client": false, 00:23:42.548 "zerocopy_threshold": 0, 00:23:42.548 "tls_version": 0, 00:23:42.548 "enable_ktls": false 00:23:42.548 } 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "method": "sock_impl_set_options", 00:23:42.548 "params": { 00:23:42.548 "impl_name": "posix", 00:23:42.548 "recv_buf_size": 2097152, 00:23:42.548 "send_buf_size": 2097152, 00:23:42.548 "enable_recv_pipe": true, 00:23:42.548 "enable_quickack": false, 00:23:42.548 "enable_placement_id": 0, 
00:23:42.548 "enable_zerocopy_send_server": true, 00:23:42.548 "enable_zerocopy_send_client": false, 00:23:42.548 "zerocopy_threshold": 0, 00:23:42.548 "tls_version": 0, 00:23:42.548 "enable_ktls": false 00:23:42.548 } 00:23:42.548 } 00:23:42.548 ] 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "subsystem": "vmd", 00:23:42.548 "config": [] 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "subsystem": "accel", 00:23:42.548 "config": [ 00:23:42.548 { 00:23:42.548 "method": "accel_set_options", 00:23:42.548 "params": { 00:23:42.548 "small_cache_size": 128, 00:23:42.548 "large_cache_size": 16, 00:23:42.548 "task_count": 2048, 00:23:42.548 "sequence_count": 2048, 00:23:42.548 "buf_count": 2048 00:23:42.548 } 00:23:42.548 } 00:23:42.548 ] 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "subsystem": "bdev", 00:23:42.548 "config": [ 00:23:42.548 { 00:23:42.548 "method": "bdev_set_options", 00:23:42.548 "params": { 00:23:42.548 "bdev_io_pool_size": 65535, 00:23:42.548 "bdev_io_cache_size": 256, 00:23:42.548 "bdev_auto_examine": true, 00:23:42.548 "iobuf_small_cache_size": 128, 00:23:42.548 "iobuf_large_cache_size": 16 00:23:42.548 } 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "method": "bdev_raid_set_options", 00:23:42.548 "params": { 00:23:42.548 "process_window_size_kb": 1024, 00:23:42.548 "process_max_bandwidth_mb_sec": 0 00:23:42.548 } 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "method": "bdev_iscsi_set_options", 00:23:42.548 "params": { 00:23:42.548 "timeout_sec": 30 00:23:42.548 } 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "method": "bdev_nvme_set_options", 00:23:42.548 "params": { 00:23:42.548 "action_on_timeout": "none", 00:23:42.548 "timeout_us": 0, 00:23:42.548 "timeout_admin_us": 0, 00:23:42.548 "keep_alive_timeout_ms": 10000, 00:23:42.548 "arbitration_burst": 0, 00:23:42.548 "low_priority_weight": 0, 00:23:42.548 "medium_priority_weight": 0, 00:23:42.548 "high_priority_weight": 0, 00:23:42.548 "nvme_adminq_poll_period_us": 10000, 00:23:42.548 "nvme_ioq_poll_period_us": 0, 
00:23:42.548 "io_queue_requests": 0, 00:23:42.548 "delay_cmd_submit": true, 00:23:42.548 "transport_retry_count": 4, 00:23:42.548 "bdev_retry_count": 3, 00:23:42.548 "transport_ack_timeout": 0, 00:23:42.548 "ctrlr_loss_timeout_sec": 0, 00:23:42.548 "reconnect_delay_sec": 0, 00:23:42.548 "fast_io_fail_timeout_sec": 0, 00:23:42.548 "disable_auto_failback": false, 00:23:42.548 "generate_uuids": false, 00:23:42.548 "transport_tos": 0, 00:23:42.548 "nvme_error_stat": false, 00:23:42.548 "rdma_srq_size": 0, 00:23:42.548 "io_path_stat": false, 00:23:42.548 "allow_accel_sequence": false, 00:23:42.548 "rdma_max_cq_size": 0, 00:23:42.548 "rdma_cm_event_timeout_ms": 0, 00:23:42.548 "dhchap_digests": [ 00:23:42.548 "sha256", 00:23:42.548 "sha384", 00:23:42.548 "sha512" 00:23:42.548 ], 00:23:42.548 "dhchap_dhgroups": [ 00:23:42.548 "null", 00:23:42.548 "ffdhe2048", 00:23:42.548 "ffdhe3072", 00:23:42.548 "ffdhe4096", 00:23:42.548 "ffdhe6144", 00:23:42.548 "ffdhe8192" 00:23:42.548 ] 00:23:42.548 } 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "method": "bdev_nvme_set_hotplug", 00:23:42.548 "params": { 00:23:42.548 "period_us": 100000, 00:23:42.548 "enable": false 00:23:42.548 } 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "method": "bdev_malloc_create", 00:23:42.548 "params": { 00:23:42.548 "name": "malloc0", 00:23:42.548 "num_blocks": 8192, 00:23:42.548 "block_size": 4096, 00:23:42.548 "physical_block_size": 4096, 00:23:42.548 "uuid": "dfca0b4a-a218-4ee5-8e14-c7c5ce04c854", 00:23:42.548 "optimal_io_boundary": 0, 00:23:42.548 "md_size": 0, 00:23:42.548 "dif_type": 0, 00:23:42.548 "dif_is_head_of_md": false, 00:23:42.548 "dif_pi_format": 0 00:23:42.548 } 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "method": "bdev_wait_for_examine" 00:23:42.548 } 00:23:42.548 ] 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "subsystem": "nbd", 00:23:42.548 "config": [] 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "subsystem": "scheduler", 00:23:42.548 "config": [ 00:23:42.548 { 00:23:42.548 "method": 
"framework_set_scheduler", 00:23:42.548 "params": { 00:23:42.548 "name": "static" 00:23:42.548 } 00:23:42.548 } 00:23:42.548 ] 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "subsystem": "nvmf", 00:23:42.548 "config": [ 00:23:42.548 { 00:23:42.548 "method": "nvmf_set_config", 00:23:42.548 "params": { 00:23:42.548 "discovery_filter": "match_any", 00:23:42.548 "admin_cmd_passthru": { 00:23:42.548 "identify_ctrlr": false 00:23:42.548 }, 00:23:42.548 "dhchap_digests": [ 00:23:42.548 "sha256", 00:23:42.548 "sha384", 00:23:42.548 "sha512" 00:23:42.548 ], 00:23:42.548 "dhchap_dhgroups": [ 00:23:42.548 "null", 00:23:42.548 "ffdhe2048", 00:23:42.548 "ffdhe3072", 00:23:42.548 "ffdhe4096", 00:23:42.548 "ffdhe6144", 00:23:42.548 "ffdhe8192" 00:23:42.548 ] 00:23:42.548 } 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "method": "nvmf_set_max_subsystems", 00:23:42.548 "params": { 00:23:42.548 "max_subsystems": 1024 00:23:42.548 } 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "method": "nvmf_set_crdt", 00:23:42.548 "params": { 00:23:42.548 "crdt1": 0, 00:23:42.548 "crdt2": 0, 00:23:42.548 "crdt3": 0 00:23:42.548 } 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "method": "nvmf_create_transport", 00:23:42.548 "params": { 00:23:42.548 "trtype": "TCP", 00:23:42.548 "max_queue_depth": 128, 00:23:42.548 "max_io_qpairs_per_ctrlr": 127, 00:23:42.548 "in_capsule_data_size": 4096, 00:23:42.548 "max_io_size": 131072, 00:23:42.548 "io_unit_size": 131072, 00:23:42.548 "max_aq_depth": 128, 00:23:42.548 "num_shared_buffers": 511, 00:23:42.548 "buf_cache_size": 4294967295, 00:23:42.548 "dif_insert_or_strip": false, 00:23:42.548 "zcopy": false, 00:23:42.548 "c2h_success": false, 00:23:42.548 "sock_priority": 0, 00:23:42.548 "abort_timeout_sec": 1, 00:23:42.548 "ack_timeout": 0, 00:23:42.548 "data_wr_pool_size": 0 00:23:42.548 } 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "method": "nvmf_create_subsystem", 00:23:42.548 "params": { 00:23:42.548 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.548 
"allow_any_host": false, 00:23:42.548 "serial_number": "SPDK00000000000001", 00:23:42.548 "model_number": "SPDK bdev Controller", 00:23:42.548 "max_namespaces": 10, 00:23:42.548 "min_cntlid": 1, 00:23:42.548 "max_cntlid": 65519, 00:23:42.548 "ana_reporting": false 00:23:42.548 } 00:23:42.548 }, 00:23:42.548 { 00:23:42.548 "method": "nvmf_subsystem_add_host", 00:23:42.548 "params": { 00:23:42.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.549 "host": "nqn.2016-06.io.spdk:host1", 00:23:42.549 "psk": "key0" 00:23:42.549 } 00:23:42.549 }, 00:23:42.549 { 00:23:42.549 "method": "nvmf_subsystem_add_ns", 00:23:42.549 "params": { 00:23:42.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.549 "namespace": { 00:23:42.549 "nsid": 1, 00:23:42.549 "bdev_name": "malloc0", 00:23:42.549 "nguid": "DFCA0B4AA2184EE58E14C7C5CE04C854", 00:23:42.549 "uuid": "dfca0b4a-a218-4ee5-8e14-c7c5ce04c854", 00:23:42.549 "no_auto_visible": false 00:23:42.549 } 00:23:42.549 } 00:23:42.549 }, 00:23:42.549 { 00:23:42.549 "method": "nvmf_subsystem_add_listener", 00:23:42.549 "params": { 00:23:42.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.549 "listen_address": { 00:23:42.549 "trtype": "TCP", 00:23:42.549 "adrfam": "IPv4", 00:23:42.549 "traddr": "10.0.0.2", 00:23:42.549 "trsvcid": "4420" 00:23:42.549 }, 00:23:42.549 "secure_channel": true 00:23:42.549 } 00:23:42.549 } 00:23:42.549 ] 00:23:42.549 } 00:23:42.549 ] 00:23:42.549 }' 00:23:42.549 18:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:42.808 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:42.808 "subsystems": [ 00:23:42.808 { 00:23:42.808 "subsystem": "keyring", 00:23:42.808 "config": [ 00:23:42.808 { 00:23:42.808 "method": "keyring_file_add_key", 00:23:42.808 "params": { 00:23:42.808 "name": "key0", 00:23:42.808 "path": "/tmp/tmp.U3GzqFrrFs" 00:23:42.808 } 
00:23:42.808 } 00:23:42.808 ] 00:23:42.808 }, 00:23:42.808 { 00:23:42.808 "subsystem": "iobuf", 00:23:42.808 "config": [ 00:23:42.808 { 00:23:42.808 "method": "iobuf_set_options", 00:23:42.808 "params": { 00:23:42.808 "small_pool_count": 8192, 00:23:42.808 "large_pool_count": 1024, 00:23:42.808 "small_bufsize": 8192, 00:23:42.808 "large_bufsize": 135168, 00:23:42.808 "enable_numa": false 00:23:42.808 } 00:23:42.808 } 00:23:42.808 ] 00:23:42.808 }, 00:23:42.808 { 00:23:42.808 "subsystem": "sock", 00:23:42.808 "config": [ 00:23:42.808 { 00:23:42.808 "method": "sock_set_default_impl", 00:23:42.808 "params": { 00:23:42.808 "impl_name": "posix" 00:23:42.808 } 00:23:42.808 }, 00:23:42.808 { 00:23:42.808 "method": "sock_impl_set_options", 00:23:42.808 "params": { 00:23:42.808 "impl_name": "ssl", 00:23:42.808 "recv_buf_size": 4096, 00:23:42.808 "send_buf_size": 4096, 00:23:42.808 "enable_recv_pipe": true, 00:23:42.808 "enable_quickack": false, 00:23:42.808 "enable_placement_id": 0, 00:23:42.808 "enable_zerocopy_send_server": true, 00:23:42.808 "enable_zerocopy_send_client": false, 00:23:42.808 "zerocopy_threshold": 0, 00:23:42.808 "tls_version": 0, 00:23:42.808 "enable_ktls": false 00:23:42.808 } 00:23:42.808 }, 00:23:42.808 { 00:23:42.808 "method": "sock_impl_set_options", 00:23:42.808 "params": { 00:23:42.808 "impl_name": "posix", 00:23:42.808 "recv_buf_size": 2097152, 00:23:42.808 "send_buf_size": 2097152, 00:23:42.808 "enable_recv_pipe": true, 00:23:42.808 "enable_quickack": false, 00:23:42.808 "enable_placement_id": 0, 00:23:42.808 "enable_zerocopy_send_server": true, 00:23:42.808 "enable_zerocopy_send_client": false, 00:23:42.808 "zerocopy_threshold": 0, 00:23:42.808 "tls_version": 0, 00:23:42.808 "enable_ktls": false 00:23:42.808 } 00:23:42.808 } 00:23:42.808 ] 00:23:42.808 }, 00:23:42.808 { 00:23:42.808 "subsystem": "vmd", 00:23:42.808 "config": [] 00:23:42.808 }, 00:23:42.808 { 00:23:42.808 "subsystem": "accel", 00:23:42.808 "config": [ 00:23:42.808 { 00:23:42.808 
"method": "accel_set_options", 00:23:42.808 "params": { 00:23:42.808 "small_cache_size": 128, 00:23:42.808 "large_cache_size": 16, 00:23:42.808 "task_count": 2048, 00:23:42.808 "sequence_count": 2048, 00:23:42.808 "buf_count": 2048 00:23:42.808 } 00:23:42.808 } 00:23:42.808 ] 00:23:42.808 }, 00:23:42.808 { 00:23:42.808 "subsystem": "bdev", 00:23:42.808 "config": [ 00:23:42.808 { 00:23:42.808 "method": "bdev_set_options", 00:23:42.808 "params": { 00:23:42.808 "bdev_io_pool_size": 65535, 00:23:42.808 "bdev_io_cache_size": 256, 00:23:42.808 "bdev_auto_examine": true, 00:23:42.808 "iobuf_small_cache_size": 128, 00:23:42.808 "iobuf_large_cache_size": 16 00:23:42.808 } 00:23:42.808 }, 00:23:42.808 { 00:23:42.808 "method": "bdev_raid_set_options", 00:23:42.808 "params": { 00:23:42.808 "process_window_size_kb": 1024, 00:23:42.808 "process_max_bandwidth_mb_sec": 0 00:23:42.808 } 00:23:42.808 }, 00:23:42.808 { 00:23:42.808 "method": "bdev_iscsi_set_options", 00:23:42.808 "params": { 00:23:42.808 "timeout_sec": 30 00:23:42.808 } 00:23:42.808 }, 00:23:42.808 { 00:23:42.808 "method": "bdev_nvme_set_options", 00:23:42.808 "params": { 00:23:42.808 "action_on_timeout": "none", 00:23:42.808 "timeout_us": 0, 00:23:42.808 "timeout_admin_us": 0, 00:23:42.808 "keep_alive_timeout_ms": 10000, 00:23:42.808 "arbitration_burst": 0, 00:23:42.808 "low_priority_weight": 0, 00:23:42.808 "medium_priority_weight": 0, 00:23:42.808 "high_priority_weight": 0, 00:23:42.808 "nvme_adminq_poll_period_us": 10000, 00:23:42.808 "nvme_ioq_poll_period_us": 0, 00:23:42.808 "io_queue_requests": 512, 00:23:42.808 "delay_cmd_submit": true, 00:23:42.808 "transport_retry_count": 4, 00:23:42.808 "bdev_retry_count": 3, 00:23:42.808 "transport_ack_timeout": 0, 00:23:42.808 "ctrlr_loss_timeout_sec": 0, 00:23:42.808 "reconnect_delay_sec": 0, 00:23:42.808 "fast_io_fail_timeout_sec": 0, 00:23:42.808 "disable_auto_failback": false, 00:23:42.808 "generate_uuids": false, 00:23:42.808 "transport_tos": 0, 00:23:42.808 
"nvme_error_stat": false, 00:23:42.808 "rdma_srq_size": 0, 00:23:42.808 "io_path_stat": false, 00:23:42.808 "allow_accel_sequence": false, 00:23:42.808 "rdma_max_cq_size": 0, 00:23:42.808 "rdma_cm_event_timeout_ms": 0, 00:23:42.808 "dhchap_digests": [ 00:23:42.808 "sha256", 00:23:42.808 "sha384", 00:23:42.808 "sha512" 00:23:42.809 ], 00:23:42.809 "dhchap_dhgroups": [ 00:23:42.809 "null", 00:23:42.809 "ffdhe2048", 00:23:42.809 "ffdhe3072", 00:23:42.809 "ffdhe4096", 00:23:42.809 "ffdhe6144", 00:23:42.809 "ffdhe8192" 00:23:42.809 ] 00:23:42.809 } 00:23:42.809 }, 00:23:42.809 { 00:23:42.809 "method": "bdev_nvme_attach_controller", 00:23:42.809 "params": { 00:23:42.809 "name": "TLSTEST", 00:23:42.809 "trtype": "TCP", 00:23:42.809 "adrfam": "IPv4", 00:23:42.809 "traddr": "10.0.0.2", 00:23:42.809 "trsvcid": "4420", 00:23:42.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.809 "prchk_reftag": false, 00:23:42.809 "prchk_guard": false, 00:23:42.809 "ctrlr_loss_timeout_sec": 0, 00:23:42.809 "reconnect_delay_sec": 0, 00:23:42.809 "fast_io_fail_timeout_sec": 0, 00:23:42.809 "psk": "key0", 00:23:42.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.809 "hdgst": false, 00:23:42.809 "ddgst": false, 00:23:42.809 "multipath": "multipath" 00:23:42.809 } 00:23:42.809 }, 00:23:42.809 { 00:23:42.809 "method": "bdev_nvme_set_hotplug", 00:23:42.809 "params": { 00:23:42.809 "period_us": 100000, 00:23:42.809 "enable": false 00:23:42.809 } 00:23:42.809 }, 00:23:42.809 { 00:23:42.809 "method": "bdev_wait_for_examine" 00:23:42.809 } 00:23:42.809 ] 00:23:42.809 }, 00:23:42.809 { 00:23:42.809 "subsystem": "nbd", 00:23:42.809 "config": [] 00:23:42.809 } 00:23:42.809 ] 00:23:42.809 }' 00:23:42.809 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 768054 00:23:42.809 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 768054 ']' 00:23:42.809 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 768054 00:23:42.809 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:42.809 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.809 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 768054 00:23:42.809 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:42.809 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:42.809 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 768054' 00:23:42.809 killing process with pid 768054 00:23:42.809 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 768054 00:23:42.809 Received shutdown signal, test time was about 10.000000 seconds 00:23:42.809 00:23:42.809 Latency(us) 00:23:42.809 [2024-11-17T17:44:29.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.809 [2024-11-17T17:44:29.385Z] =================================================================================================================== 00:23:42.809 [2024-11-17T17:44:29.385Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:42.809 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 768054 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 767765 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 767765 ']' 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 767765 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 767765 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 767765' 00:23:43.066 killing process with pid 767765 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 767765 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 767765 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.066 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:43.066 "subsystems": [ 00:23:43.066 { 00:23:43.066 "subsystem": "keyring", 00:23:43.066 "config": [ 00:23:43.066 { 00:23:43.066 "method": "keyring_file_add_key", 00:23:43.066 "params": { 00:23:43.066 "name": "key0", 00:23:43.066 "path": "/tmp/tmp.U3GzqFrrFs" 00:23:43.066 } 00:23:43.066 } 00:23:43.066 ] 00:23:43.066 }, 00:23:43.066 { 00:23:43.066 "subsystem": "iobuf", 00:23:43.066 "config": [ 00:23:43.066 { 00:23:43.066 "method": "iobuf_set_options", 00:23:43.066 "params": { 00:23:43.066 "small_pool_count": 8192, 00:23:43.066 "large_pool_count": 1024, 00:23:43.066 "small_bufsize": 8192, 00:23:43.066 "large_bufsize": 135168, 
00:23:43.066 "enable_numa": false 00:23:43.066 } 00:23:43.066 } 00:23:43.066 ] 00:23:43.066 }, 00:23:43.066 { 00:23:43.066 "subsystem": "sock", 00:23:43.066 "config": [ 00:23:43.066 { 00:23:43.066 "method": "sock_set_default_impl", 00:23:43.066 "params": { 00:23:43.066 "impl_name": "posix" 00:23:43.066 } 00:23:43.066 }, 00:23:43.066 { 00:23:43.066 "method": "sock_impl_set_options", 00:23:43.066 "params": { 00:23:43.066 "impl_name": "ssl", 00:23:43.066 "recv_buf_size": 4096, 00:23:43.066 "send_buf_size": 4096, 00:23:43.066 "enable_recv_pipe": true, 00:23:43.066 "enable_quickack": false, 00:23:43.066 "enable_placement_id": 0, 00:23:43.066 "enable_zerocopy_send_server": true, 00:23:43.066 "enable_zerocopy_send_client": false, 00:23:43.066 "zerocopy_threshold": 0, 00:23:43.066 "tls_version": 0, 00:23:43.066 "enable_ktls": false 00:23:43.066 } 00:23:43.066 }, 00:23:43.066 { 00:23:43.066 "method": "sock_impl_set_options", 00:23:43.066 "params": { 00:23:43.066 "impl_name": "posix", 00:23:43.066 "recv_buf_size": 2097152, 00:23:43.066 "send_buf_size": 2097152, 00:23:43.066 "enable_recv_pipe": true, 00:23:43.066 "enable_quickack": false, 00:23:43.066 "enable_placement_id": 0, 00:23:43.066 "enable_zerocopy_send_server": true, 00:23:43.066 "enable_zerocopy_send_client": false, 00:23:43.066 "zerocopy_threshold": 0, 00:23:43.066 "tls_version": 0, 00:23:43.066 "enable_ktls": false 00:23:43.066 } 00:23:43.066 } 00:23:43.066 ] 00:23:43.066 }, 00:23:43.066 { 00:23:43.066 "subsystem": "vmd", 00:23:43.066 "config": [] 00:23:43.066 }, 00:23:43.066 { 00:23:43.066 "subsystem": "accel", 00:23:43.066 "config": [ 00:23:43.066 { 00:23:43.066 "method": "accel_set_options", 00:23:43.066 "params": { 00:23:43.066 "small_cache_size": 128, 00:23:43.066 "large_cache_size": 16, 00:23:43.066 "task_count": 2048, 00:23:43.066 "sequence_count": 2048, 00:23:43.066 "buf_count": 2048 00:23:43.066 } 00:23:43.066 } 00:23:43.066 ] 00:23:43.066 }, 00:23:43.066 { 00:23:43.066 "subsystem": "bdev", 00:23:43.066 
"config": [ 00:23:43.066 { 00:23:43.066 "method": "bdev_set_options", 00:23:43.066 "params": { 00:23:43.066 "bdev_io_pool_size": 65535, 00:23:43.066 "bdev_io_cache_size": 256, 00:23:43.066 "bdev_auto_examine": true, 00:23:43.066 "iobuf_small_cache_size": 128, 00:23:43.066 "iobuf_large_cache_size": 16 00:23:43.066 } 00:23:43.066 }, 00:23:43.066 { 00:23:43.066 "method": "bdev_raid_set_options", 00:23:43.066 "params": { 00:23:43.066 "process_window_size_kb": 1024, 00:23:43.066 "process_max_bandwidth_mb_sec": 0 00:23:43.066 } 00:23:43.066 }, 00:23:43.066 { 00:23:43.066 "method": "bdev_iscsi_set_options", 00:23:43.066 "params": { 00:23:43.066 "timeout_sec": 30 00:23:43.066 } 00:23:43.066 }, 00:23:43.066 { 00:23:43.066 "method": "bdev_nvme_set_options", 00:23:43.066 "params": { 00:23:43.066 "action_on_timeout": "none", 00:23:43.066 "timeout_us": 0, 00:23:43.066 "timeout_admin_us": 0, 00:23:43.066 "keep_alive_timeout_ms": 10000, 00:23:43.066 "arbitration_burst": 0, 00:23:43.066 "low_priority_weight": 0, 00:23:43.066 "medium_priority_weight": 0, 00:23:43.066 "high_priority_weight": 0, 00:23:43.066 "nvme_adminq_poll_period_us": 10000, 00:23:43.066 "nvme_ioq_poll_period_us": 0, 00:23:43.066 "io_queue_requests": 0, 00:23:43.066 "delay_cmd_submit": true, 00:23:43.066 "transport_retry_count": 4, 00:23:43.066 "bdev_retry_count": 3, 00:23:43.066 "transport_ack_timeout": 0, 00:23:43.066 "ctrlr_loss_timeout_sec": 0, 00:23:43.067 "reconnect_delay_sec": 0, 00:23:43.067 "fast_io_fail_timeout_sec": 0, 00:23:43.067 "disable_auto_failback": false, 00:23:43.067 "generate_uuids": false, 00:23:43.067 "transport_tos": 0, 00:23:43.067 "nvme_error_stat": false, 00:23:43.067 "rdma_srq_size": 0, 00:23:43.067 "io_path_stat": false, 00:23:43.067 "allow_accel_sequence": false, 00:23:43.067 "rdma_max_cq_size": 0, 00:23:43.067 "rdma_cm_event_timeout_ms": 0, 00:23:43.067 "dhchap_digests": [ 00:23:43.067 "sha256", 00:23:43.067 "sha384", 00:23:43.067 "sha512" 00:23:43.067 ], 00:23:43.067 
"dhchap_dhgroups": [ 00:23:43.067 "null", 00:23:43.067 "ffdhe2048", 00:23:43.067 "ffdhe3072", 00:23:43.067 "ffdhe4096", 00:23:43.067 "ffdhe6144", 00:23:43.067 "ffdhe8192" 00:23:43.067 ] 00:23:43.067 } 00:23:43.067 }, 00:23:43.067 { 00:23:43.067 "method": "bdev_nvme_set_hotplug", 00:23:43.067 "params": { 00:23:43.067 "period_us": 100000, 00:23:43.067 "enable": false 00:23:43.067 } 00:23:43.067 }, 00:23:43.067 { 00:23:43.067 "method": "bdev_malloc_create", 00:23:43.067 "params": { 00:23:43.067 "name": "malloc0", 00:23:43.067 "num_blocks": 8192, 00:23:43.067 "block_size": 4096, 00:23:43.067 "physical_block_size": 4096, 00:23:43.067 "uuid": "dfca0b4a-a218-4ee5-8e14-c7c5ce04c854", 00:23:43.067 "optimal_io_boundary": 0, 00:23:43.067 "md_size": 0, 00:23:43.067 "dif_type": 0, 00:23:43.067 "dif_is_head_of_md": false, 00:23:43.067 "dif_pi_format": 0 00:23:43.067 } 00:23:43.067 }, 00:23:43.067 { 00:23:43.067 "method": "bdev_wait_for_examine" 00:23:43.067 } 00:23:43.067 ] 00:23:43.067 }, 00:23:43.067 { 00:23:43.067 "subsystem": "nbd", 00:23:43.067 "config": [] 00:23:43.067 }, 00:23:43.067 { 00:23:43.067 "subsystem": "scheduler", 00:23:43.067 "config": [ 00:23:43.067 { 00:23:43.067 "method": "framework_set_scheduler", 00:23:43.067 "params": { 00:23:43.067 "name": "static" 00:23:43.067 } 00:23:43.067 } 00:23:43.067 ] 00:23:43.067 }, 00:23:43.067 { 00:23:43.067 "subsystem": "nvmf", 00:23:43.067 "config": [ 00:23:43.067 { 00:23:43.067 "method": "nvmf_set_config", 00:23:43.067 "params": { 00:23:43.067 "discovery_filter": "match_any", 00:23:43.067 "admin_cmd_passthru": { 00:23:43.067 "identify_ctrlr": false 00:23:43.067 }, 00:23:43.067 "dhchap_digests": [ 00:23:43.067 "sha256", 00:23:43.067 "sha384", 00:23:43.067 "sha512" 00:23:43.067 ], 00:23:43.067 "dhchap_dhgroups": [ 00:23:43.067 "null", 00:23:43.067 "ffdhe2048", 00:23:43.067 "ffdhe3072", 00:23:43.067 "ffdhe4096", 00:23:43.067 "ffdhe6144", 00:23:43.067 "ffdhe8192" 00:23:43.067 ] 00:23:43.067 } 00:23:43.067 }, 00:23:43.067 { 
00:23:43.067 "method": "nvmf_set_max_subsystems", 00:23:43.067 "params": { 00:23:43.067 "max_subsystems": 1024 00:23:43.067 } 00:23:43.067 }, 00:23:43.067 { 00:23:43.067 "method": "nvmf_set_crdt", 00:23:43.067 "params": { 00:23:43.067 "crdt1": 0, 00:23:43.067 "crdt2": 0, 00:23:43.067 "crdt3": 0 00:23:43.067 } 00:23:43.067 }, 00:23:43.067 { 00:23:43.067 "method": "nvmf_create_transport", 00:23:43.067 "params": { 00:23:43.067 "trtype": "TCP", 00:23:43.067 "max_queue_depth": 128, 00:23:43.067 "max_io_qpairs_per_ctrlr": 127, 00:23:43.067 "in_capsule_data_size": 4096, 00:23:43.067 "max_io_size": 131072, 00:23:43.067 "io_unit_size": 131072, 00:23:43.067 "max_aq_depth": 128, 00:23:43.067 "num_shared_buffers": 511, 00:23:43.067 "buf_cache_size": 4294967295, 00:23:43.067 "dif_insert_or_strip": false, 00:23:43.067 "zcopy": false, 00:23:43.067 "c2h_success": false, 00:23:43.067 "sock_priority": 0, 00:23:43.067 "abort_timeout_sec": 1, 00:23:43.067 "ack_timeout": 0, 00:23:43.067 "data_wr_pool_size": 0 00:23:43.067 } 00:23:43.067 }, 00:23:43.067 { 00:23:43.067 "method": "nvmf_create_subsystem", 00:23:43.067 "params": { 00:23:43.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.067 "allow_any_host": false, 00:23:43.067 "serial_number": "SPDK00000000000001", 00:23:43.067 "model_number": "SPDK bdev Controller", 00:23:43.067 "max_namespaces": 10, 00:23:43.067 "min_cntlid": 1, 00:23:43.067 "max_cntlid": 65519, 00:23:43.067 "ana_reporting": false 00:23:43.067 } 00:23:43.067 }, 00:23:43.067 { 00:23:43.067 "method": "nvmf_subsystem_add_host", 00:23:43.067 "params": { 00:23:43.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.067 "host": "nqn.2016-06.io.spdk:host1", 00:23:43.067 "psk": "key0" 00:23:43.067 } 00:23:43.067 }, 00:23:43.067 { 00:23:43.067 "method": "nvmf_subsystem_add_ns", 00:23:43.067 "params": { 00:23:43.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.067 "namespace": { 00:23:43.067 "nsid": 1, 00:23:43.067 "bdev_name": "malloc0", 00:23:43.067 "nguid": 
"DFCA0B4AA2184EE58E14C7C5CE04C854", 00:23:43.067 "uuid": "dfca0b4a-a218-4ee5-8e14-c7c5ce04c854", 00:23:43.067 "no_auto_visible": false 00:23:43.067 } 00:23:43.067 } 00:23:43.067 }, 00:23:43.067 { 00:23:43.067 "method": "nvmf_subsystem_add_listener", 00:23:43.067 "params": { 00:23:43.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.067 "listen_address": { 00:23:43.067 "trtype": "TCP", 00:23:43.067 "adrfam": "IPv4", 00:23:43.067 "traddr": "10.0.0.2", 00:23:43.067 "trsvcid": "4420" 00:23:43.067 }, 00:23:43.067 "secure_channel": true 00:23:43.067 } 00:23:43.067 } 00:23:43.067 ] 00:23:43.067 } 00:23:43.067 ] 00:23:43.067 }' 00:23:43.067 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=768332 00:23:43.067 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:43.067 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 768332 00:23:43.067 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 768332 ']' 00:23:43.067 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.067 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.067 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:43.067 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.067 18:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.325 [2024-11-17 18:44:29.677081] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:43.325 [2024-11-17 18:44:29.677156] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.325 [2024-11-17 18:44:29.748616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.325 [2024-11-17 18:44:29.793490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.325 [2024-11-17 18:44:29.793540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.325 [2024-11-17 18:44:29.793560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.325 [2024-11-17 18:44:29.793572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.325 [2024-11-17 18:44:29.793581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:43.325 [2024-11-17 18:44:29.794208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.583 [2024-11-17 18:44:30.030664] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.583 [2024-11-17 18:44:30.062707] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:43.583 [2024-11-17 18:44:30.062974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=768479 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 768479 /var/tmp/bdevperf.sock 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 768479 ']' 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c 
/dev/fd/63 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.149 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:44.149 "subsystems": [ 00:23:44.149 { 00:23:44.149 "subsystem": "keyring", 00:23:44.149 "config": [ 00:23:44.149 { 00:23:44.149 "method": "keyring_file_add_key", 00:23:44.149 "params": { 00:23:44.149 "name": "key0", 00:23:44.149 "path": "/tmp/tmp.U3GzqFrrFs" 00:23:44.149 } 00:23:44.149 } 00:23:44.149 ] 00:23:44.149 }, 00:23:44.149 { 00:23:44.149 "subsystem": "iobuf", 00:23:44.149 "config": [ 00:23:44.149 { 00:23:44.149 "method": "iobuf_set_options", 00:23:44.149 "params": { 00:23:44.149 "small_pool_count": 8192, 00:23:44.149 "large_pool_count": 1024, 00:23:44.149 "small_bufsize": 8192, 00:23:44.149 "large_bufsize": 135168, 00:23:44.149 "enable_numa": false 00:23:44.149 } 00:23:44.149 } 00:23:44.149 ] 00:23:44.149 }, 00:23:44.149 { 00:23:44.149 "subsystem": "sock", 00:23:44.149 "config": [ 00:23:44.149 { 00:23:44.149 "method": "sock_set_default_impl", 00:23:44.149 "params": { 00:23:44.149 "impl_name": "posix" 00:23:44.149 } 00:23:44.149 }, 00:23:44.149 { 00:23:44.149 "method": "sock_impl_set_options", 00:23:44.149 "params": { 00:23:44.149 "impl_name": "ssl", 00:23:44.149 "recv_buf_size": 4096, 00:23:44.149 "send_buf_size": 4096, 00:23:44.149 "enable_recv_pipe": true, 00:23:44.149 "enable_quickack": false, 00:23:44.149 "enable_placement_id": 0, 00:23:44.149 "enable_zerocopy_send_server": true, 00:23:44.149 "enable_zerocopy_send_client": false, 00:23:44.149 "zerocopy_threshold": 0, 00:23:44.149 "tls_version": 0, 00:23:44.149 "enable_ktls": false 00:23:44.149 } 
00:23:44.149 }, 00:23:44.149 { 00:23:44.149 "method": "sock_impl_set_options", 00:23:44.149 "params": { 00:23:44.149 "impl_name": "posix", 00:23:44.149 "recv_buf_size": 2097152, 00:23:44.149 "send_buf_size": 2097152, 00:23:44.149 "enable_recv_pipe": true, 00:23:44.149 "enable_quickack": false, 00:23:44.149 "enable_placement_id": 0, 00:23:44.149 "enable_zerocopy_send_server": true, 00:23:44.149 "enable_zerocopy_send_client": false, 00:23:44.149 "zerocopy_threshold": 0, 00:23:44.149 "tls_version": 0, 00:23:44.149 "enable_ktls": false 00:23:44.149 } 00:23:44.149 } 00:23:44.149 ] 00:23:44.149 }, 00:23:44.149 { 00:23:44.149 "subsystem": "vmd", 00:23:44.149 "config": [] 00:23:44.149 }, 00:23:44.149 { 00:23:44.149 "subsystem": "accel", 00:23:44.149 "config": [ 00:23:44.149 { 00:23:44.149 "method": "accel_set_options", 00:23:44.149 "params": { 00:23:44.149 "small_cache_size": 128, 00:23:44.149 "large_cache_size": 16, 00:23:44.149 "task_count": 2048, 00:23:44.149 "sequence_count": 2048, 00:23:44.149 "buf_count": 2048 00:23:44.149 } 00:23:44.149 } 00:23:44.149 ] 00:23:44.149 }, 00:23:44.149 { 00:23:44.149 "subsystem": "bdev", 00:23:44.149 "config": [ 00:23:44.149 { 00:23:44.149 "method": "bdev_set_options", 00:23:44.149 "params": { 00:23:44.149 "bdev_io_pool_size": 65535, 00:23:44.149 "bdev_io_cache_size": 256, 00:23:44.149 "bdev_auto_examine": true, 00:23:44.149 "iobuf_small_cache_size": 128, 00:23:44.149 "iobuf_large_cache_size": 16 00:23:44.150 } 00:23:44.150 }, 00:23:44.150 { 00:23:44.150 "method": "bdev_raid_set_options", 00:23:44.150 "params": { 00:23:44.150 "process_window_size_kb": 1024, 00:23:44.150 "process_max_bandwidth_mb_sec": 0 00:23:44.150 } 00:23:44.150 }, 00:23:44.150 { 00:23:44.150 "method": "bdev_iscsi_set_options", 00:23:44.150 "params": { 00:23:44.150 "timeout_sec": 30 00:23:44.150 } 00:23:44.150 }, 00:23:44.150 { 00:23:44.150 "method": "bdev_nvme_set_options", 00:23:44.150 "params": { 00:23:44.150 "action_on_timeout": "none", 00:23:44.150 "timeout_us": 
0, 00:23:44.150 "timeout_admin_us": 0, 00:23:44.150 "keep_alive_timeout_ms": 10000, 00:23:44.150 "arbitration_burst": 0, 00:23:44.150 "low_priority_weight": 0, 00:23:44.150 "medium_priority_weight": 0, 00:23:44.150 "high_priority_weight": 0, 00:23:44.150 "nvme_adminq_poll_period_us": 10000, 00:23:44.150 "nvme_ioq_poll_period_us": 0, 00:23:44.150 "io_queue_requests": 512, 00:23:44.150 "delay_cmd_submit": true, 00:23:44.150 "transport_retry_count": 4, 00:23:44.150 "bdev_retry_count": 3, 00:23:44.150 "transport_ack_timeout": 0, 00:23:44.150 "ctrlr_loss_timeout_sec": 0, 00:23:44.150 "reconnect_delay_sec": 0, 00:23:44.150 "fast_io_fail_timeout_sec": 0, 00:23:44.150 "disable_auto_failback": false, 00:23:44.150 "generate_uuids": false, 00:23:44.150 "transport_tos": 0, 00:23:44.150 "nvme_error_stat": false, 00:23:44.150 "rdma_srq_size": 0, 00:23:44.150 "io_path_stat": false, 00:23:44.150 "allow_accel_sequence": false, 00:23:44.150 "rdma_max_cq_size": 0, 00:23:44.150 "rdma_cm_event_timeout_ms": 0, 00:23:44.150 "dhchap_digests": [ 00:23:44.150 "sha256", 00:23:44.150 "sha384", 00:23:44.150 "sha512" 00:23:44.150 ], 00:23:44.150 "dhchap_dhgroups": [ 00:23:44.150 "null", 00:23:44.150 "ffdhe2048", 00:23:44.150 "ffdhe3072", 00:23:44.150 "ffdhe4096", 00:23:44.150 "ffdhe6144", 00:23:44.150 "ffdhe8192" 00:23:44.150 ] 00:23:44.150 } 00:23:44.150 }, 00:23:44.150 { 00:23:44.150 "method": "bdev_nvme_attach_controller", 00:23:44.150 "params": { 00:23:44.150 "name": "TLSTEST", 00:23:44.150 "trtype": "TCP", 00:23:44.150 "adrfam": "IPv4", 00:23:44.150 "traddr": "10.0.0.2", 00:23:44.150 "trsvcid": "4420", 00:23:44.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.150 "prchk_reftag": false, 00:23:44.150 "prchk_guard": false, 00:23:44.150 "ctrlr_loss_timeout_sec": 0, 00:23:44.150 "reconnect_delay_sec": 0, 00:23:44.150 "fast_io_fail_timeout_sec": 0, 00:23:44.150 "psk": "key0", 00:23:44.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.150 "hdgst": false, 00:23:44.150 "ddgst": false, 
00:23:44.150 "multipath": "multipath" 00:23:44.150 } 00:23:44.150 }, 00:23:44.150 { 00:23:44.150 "method": "bdev_nvme_set_hotplug", 00:23:44.150 "params": { 00:23:44.150 "period_us": 100000, 00:23:44.150 "enable": false 00:23:44.150 } 00:23:44.150 }, 00:23:44.150 { 00:23:44.150 "method": "bdev_wait_for_examine" 00:23:44.150 } 00:23:44.150 ] 00:23:44.150 }, 00:23:44.150 { 00:23:44.150 "subsystem": "nbd", 00:23:44.150 "config": [] 00:23:44.150 } 00:23:44.150 ] 00:23:44.150 }' 00:23:44.150 18:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.408 [2024-11-17 18:44:30.738264] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:44.408 [2024-11-17 18:44:30.738333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid768479 ] 00:23:44.408 [2024-11-17 18:44:30.808312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.408 [2024-11-17 18:44:30.853557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.666 [2024-11-17 18:44:31.022994] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:44.666 18:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.666 18:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:44.666 18:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:44.666 Running I/O for 10 seconds... 
00:23:46.974 3053.00 IOPS, 11.93 MiB/s [2024-11-17T17:44:34.483Z] 3095.00 IOPS, 12.09 MiB/s [2024-11-17T17:44:35.417Z] 3140.00 IOPS, 12.27 MiB/s [2024-11-17T17:44:36.350Z] 3129.00 IOPS, 12.22 MiB/s [2024-11-17T17:44:37.331Z] 3159.20 IOPS, 12.34 MiB/s [2024-11-17T17:44:38.280Z] 3171.33 IOPS, 12.39 MiB/s [2024-11-17T17:44:39.653Z] 3167.14 IOPS, 12.37 MiB/s [2024-11-17T17:44:40.585Z] 3173.12 IOPS, 12.40 MiB/s [2024-11-17T17:44:41.517Z] 3168.00 IOPS, 12.38 MiB/s [2024-11-17T17:44:41.517Z] 3174.40 IOPS, 12.40 MiB/s 00:23:54.941 Latency(us) 00:23:54.941 [2024-11-17T17:44:41.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.941 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:54.941 Verification LBA range: start 0x0 length 0x2000 00:23:54.941 TLSTESTn1 : 10.03 3177.40 12.41 0.00 0.00 40199.58 9369.22 55535.69 00:23:54.941 [2024-11-17T17:44:41.517Z] =================================================================================================================== 00:23:54.941 [2024-11-17T17:44:41.517Z] Total : 3177.40 12.41 0.00 0.00 40199.58 9369.22 55535.69 00:23:54.941 { 00:23:54.941 "results": [ 00:23:54.941 { 00:23:54.941 "job": "TLSTESTn1", 00:23:54.941 "core_mask": "0x4", 00:23:54.941 "workload": "verify", 00:23:54.941 "status": "finished", 00:23:54.941 "verify_range": { 00:23:54.941 "start": 0, 00:23:54.941 "length": 8192 00:23:54.941 }, 00:23:54.941 "queue_depth": 128, 00:23:54.941 "io_size": 4096, 00:23:54.941 "runtime": 10.030834, 00:23:54.941 "iops": 3177.402796218141, 00:23:54.941 "mibps": 12.411729672727112, 00:23:54.941 "io_failed": 0, 00:23:54.941 "io_timeout": 0, 00:23:54.941 "avg_latency_us": 40199.58061579651, 00:23:54.941 "min_latency_us": 9369.22074074074, 00:23:54.941 "max_latency_us": 55535.69185185185 00:23:54.941 } 00:23:54.941 ], 00:23:54.941 "core_count": 1 00:23:54.941 } 00:23:54.941 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:23:54.941 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 768479 00:23:54.941 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 768479 ']' 00:23:54.942 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 768479 00:23:54.942 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:54.942 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.942 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 768479 00:23:54.942 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:54.942 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:54.942 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 768479' 00:23:54.942 killing process with pid 768479 00:23:54.942 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 768479 00:23:54.942 Received shutdown signal, test time was about 10.000000 seconds 00:23:54.942 00:23:54.942 Latency(us) 00:23:54.942 [2024-11-17T17:44:41.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.942 [2024-11-17T17:44:41.518Z] =================================================================================================================== 00:23:54.942 [2024-11-17T17:44:41.518Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.942 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 768479 00:23:55.201 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 768332 00:23:55.201 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # 
'[' -z 768332 ']' 00:23:55.201 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 768332 00:23:55.201 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:55.201 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.201 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 768332 00:23:55.201 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:55.201 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:55.201 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 768332' 00:23:55.201 killing process with pid 768332 00:23:55.201 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 768332 00:23:55.201 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 768332 00:23:55.459 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:55.459 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.459 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.459 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.459 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=769736 00:23:55.459 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:55.459 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 769736 00:23:55.459 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@835 -- # '[' -z 769736 ']' 00:23:55.459 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.459 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.459 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.459 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.459 18:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.459 [2024-11-17 18:44:41.859972] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:55.459 [2024-11-17 18:44:41.860082] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.459 [2024-11-17 18:44:41.933215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.459 [2024-11-17 18:44:41.977016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.459 [2024-11-17 18:44:41.977072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.459 [2024-11-17 18:44:41.977095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.459 [2024-11-17 18:44:41.977108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.459 [2024-11-17 18:44:41.977119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:55.459 [2024-11-17 18:44:41.977682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.717 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.717 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:55.717 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.717 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.717 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.717 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.717 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.U3GzqFrrFs 00:23:55.717 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.U3GzqFrrFs 00:23:55.717 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:55.975 [2024-11-17 18:44:42.364541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.975 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:56.233 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:56.491 [2024-11-17 18:44:42.881887] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:56.491 [2024-11-17 18:44:42.882154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:56.491 18:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:56.750 malloc0 00:23:56.750 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:57.008 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.U3GzqFrrFs 00:23:57.265 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:57.524 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=769980 00:23:57.524 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:57.524 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:57.524 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 769980 /var/tmp/bdevperf.sock 00:23:57.524 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 769980 ']' 00:23:57.524 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.524 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.524 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:23:57.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.524 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.524 18:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.524 [2024-11-17 18:44:44.049171] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:23:57.524 [2024-11-17 18:44:44.049279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769980 ] 00:23:57.782 [2024-11-17 18:44:44.124089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.782 [2024-11-17 18:44:44.170546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.782 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.782 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:57.782 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.U3GzqFrrFs 00:23:58.040 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:58.298 [2024-11-17 18:44:44.826856] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.556 nvme0n1 00:23:58.556 18:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:58.556 Running I/O for 1 seconds... 00:23:59.489 3567.00 IOPS, 13.93 MiB/s 00:23:59.489 Latency(us) 00:23:59.489 [2024-11-17T17:44:46.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.489 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:59.489 Verification LBA range: start 0x0 length 0x2000 00:23:59.489 nvme0n1 : 1.02 3622.02 14.15 0.00 0.00 35022.78 6407.96 29515.47 00:23:59.489 [2024-11-17T17:44:46.065Z] =================================================================================================================== 00:23:59.489 [2024-11-17T17:44:46.065Z] Total : 3622.02 14.15 0.00 0.00 35022.78 6407.96 29515.47 00:23:59.489 { 00:23:59.489 "results": [ 00:23:59.489 { 00:23:59.489 "job": "nvme0n1", 00:23:59.489 "core_mask": "0x2", 00:23:59.489 "workload": "verify", 00:23:59.489 "status": "finished", 00:23:59.489 "verify_range": { 00:23:59.489 "start": 0, 00:23:59.489 "length": 8192 00:23:59.489 }, 00:23:59.489 "queue_depth": 128, 00:23:59.489 "io_size": 4096, 00:23:59.489 "runtime": 1.02015, 00:23:59.489 "iops": 3622.016370141646, 00:23:59.489 "mibps": 14.148501445865804, 00:23:59.489 "io_failed": 0, 00:23:59.489 "io_timeout": 0, 00:23:59.489 "avg_latency_us": 35022.782617551245, 00:23:59.489 "min_latency_us": 6407.964444444445, 00:23:59.489 "max_latency_us": 29515.472592592592 00:23:59.489 } 00:23:59.489 ], 00:23:59.489 "core_count": 1 00:23:59.489 } 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 769980 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 769980 ']' 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 769980 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # 
uname 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 769980 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 769980' 00:23:59.747 killing process with pid 769980 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 769980 00:23:59.747 Received shutdown signal, test time was about 1.000000 seconds 00:23:59.747 00:23:59.747 Latency(us) 00:23:59.747 [2024-11-17T17:44:46.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.747 [2024-11-17T17:44:46.323Z] =================================================================================================================== 00:23:59.747 [2024-11-17T17:44:46.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 769980 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 769736 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 769736 ']' 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 769736 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.747 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 769736 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 769736' 00:24:00.008 killing process with pid 769736 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 769736 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 769736 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=770372 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 770372 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 770372 ']' 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.008 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.267 [2024-11-17 18:44:46.616527] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:00.267 [2024-11-17 18:44:46.616642] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.267 [2024-11-17 18:44:46.688452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.267 [2024-11-17 18:44:46.728071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.267 [2024-11-17 18:44:46.728135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.267 [2024-11-17 18:44:46.728157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.267 [2024-11-17 18:44:46.728168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.267 [2024-11-17 18:44:46.728178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:00.267 [2024-11-17 18:44:46.728728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.267 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.267 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:00.267 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:00.267 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:00.267 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.525 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.525 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:00.525 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.525 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.525 [2024-11-17 18:44:46.866836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.525 malloc0 00:24:00.525 [2024-11-17 18:44:46.898578] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:00.525 [2024-11-17 18:44:46.898851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.525 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.525 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=770403 00:24:00.525 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 770403 /var/tmp/bdevperf.sock 00:24:00.525 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 770403 ']' 00:24:00.525 18:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.525 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:00.525 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.525 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.525 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.525 18:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.525 [2024-11-17 18:44:46.973121] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:24:00.525 [2024-11-17 18:44:46.973213] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770403 ] 00:24:00.526 [2024-11-17 18:44:47.041267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.526 [2024-11-17 18:44:47.087202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.784 18:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.784 18:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:00.784 18:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.U3GzqFrrFs 00:24:01.042 18:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:01.300 [2024-11-17 18:44:47.707650] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.300 nvme0n1 00:24:01.300 18:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:01.558 Running I/O for 1 seconds... 
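The two rpc.py calls above (keyring_file_add_key, then bdev_nvme_attach_controller with --psk key0) drive the TLS attach step of this test. As a minimal sketch, the JSON-RPC payloads they send over /var/tmp/bdevperf.sock look roughly like the following; the wire framing is an assumption based on SPDK's JSON-RPC 2.0 convention, while the parameter names and values are taken from the rpc.py flags and the bperfcfg dump later in this log:

```python
import json

def rpc_request(request_id, method, params):
    """Build a JSON-RPC 2.0 request body, as SPDK's rpc.py client does.

    This helper is illustrative only; it is not part of SPDK itself.
    """
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

# Register the TLS pre-shared key file under the name "key0"
# (path matches the temp key file used in this run).
add_key = rpc_request(1, "keyring_file_add_key",
                      {"name": "key0", "path": "/tmp/tmp.U3GzqFrrFs"})

# Attach bdev "nvme0" over NVMe/TCP with that PSK, mirroring the
# -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 flags from the log.
attach = rpc_request(2, "bdev_nvme_attach_controller", {
    "name": "nvme0",
    "trtype": "TCP",
    "adrfam": "IPv4",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "psk": "key0",
})

print(json.dumps(add_key))
print(json.dumps(attach))
```

The same parameters reappear verbatim in the saved bperfcfg below, which is why the attach can later be replayed from a config file instead of issued interactively.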
00:24:02.493 3436.00 IOPS, 13.42 MiB/s 00:24:02.493 Latency(us) 00:24:02.493 [2024-11-17T17:44:49.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.493 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:02.493 Verification LBA range: start 0x0 length 0x2000 00:24:02.493 nvme0n1 : 1.02 3490.27 13.63 0.00 0.00 36317.54 8786.68 33399.09 00:24:02.493 [2024-11-17T17:44:49.069Z] =================================================================================================================== 00:24:02.493 [2024-11-17T17:44:49.069Z] Total : 3490.27 13.63 0.00 0.00 36317.54 8786.68 33399.09 00:24:02.493 { 00:24:02.493 "results": [ 00:24:02.493 { 00:24:02.493 "job": "nvme0n1", 00:24:02.493 "core_mask": "0x2", 00:24:02.493 "workload": "verify", 00:24:02.493 "status": "finished", 00:24:02.493 "verify_range": { 00:24:02.493 "start": 0, 00:24:02.493 "length": 8192 00:24:02.493 }, 00:24:02.493 "queue_depth": 128, 00:24:02.493 "io_size": 4096, 00:24:02.493 "runtime": 1.021412, 00:24:02.493 "iops": 3490.266415511077, 00:24:02.493 "mibps": 13.633853185590144, 00:24:02.493 "io_failed": 0, 00:24:02.493 "io_timeout": 0, 00:24:02.493 "avg_latency_us": 36317.53815261545, 00:24:02.493 "min_latency_us": 8786.678518518518, 00:24:02.493 "max_latency_us": 33399.08740740741 00:24:02.493 } 00:24:02.493 ], 00:24:02.493 "core_count": 1 00:24:02.493 } 00:24:02.493 18:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:02.493 18:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.493 18:44:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.493 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.493 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:02.493 "subsystems": [ 00:24:02.493 { 00:24:02.493 "subsystem": 
"keyring", 00:24:02.493 "config": [ 00:24:02.493 { 00:24:02.493 "method": "keyring_file_add_key", 00:24:02.493 "params": { 00:24:02.493 "name": "key0", 00:24:02.493 "path": "/tmp/tmp.U3GzqFrrFs" 00:24:02.493 } 00:24:02.493 } 00:24:02.493 ] 00:24:02.493 }, 00:24:02.493 { 00:24:02.493 "subsystem": "iobuf", 00:24:02.493 "config": [ 00:24:02.493 { 00:24:02.493 "method": "iobuf_set_options", 00:24:02.493 "params": { 00:24:02.494 "small_pool_count": 8192, 00:24:02.494 "large_pool_count": 1024, 00:24:02.494 "small_bufsize": 8192, 00:24:02.494 "large_bufsize": 135168, 00:24:02.494 "enable_numa": false 00:24:02.494 } 00:24:02.494 } 00:24:02.494 ] 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "subsystem": "sock", 00:24:02.494 "config": [ 00:24:02.494 { 00:24:02.494 "method": "sock_set_default_impl", 00:24:02.494 "params": { 00:24:02.494 "impl_name": "posix" 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "sock_impl_set_options", 00:24:02.494 "params": { 00:24:02.494 "impl_name": "ssl", 00:24:02.494 "recv_buf_size": 4096, 00:24:02.494 "send_buf_size": 4096, 00:24:02.494 "enable_recv_pipe": true, 00:24:02.494 "enable_quickack": false, 00:24:02.494 "enable_placement_id": 0, 00:24:02.494 "enable_zerocopy_send_server": true, 00:24:02.494 "enable_zerocopy_send_client": false, 00:24:02.494 "zerocopy_threshold": 0, 00:24:02.494 "tls_version": 0, 00:24:02.494 "enable_ktls": false 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "sock_impl_set_options", 00:24:02.494 "params": { 00:24:02.494 "impl_name": "posix", 00:24:02.494 "recv_buf_size": 2097152, 00:24:02.494 "send_buf_size": 2097152, 00:24:02.494 "enable_recv_pipe": true, 00:24:02.494 "enable_quickack": false, 00:24:02.494 "enable_placement_id": 0, 00:24:02.494 "enable_zerocopy_send_server": true, 00:24:02.494 "enable_zerocopy_send_client": false, 00:24:02.494 "zerocopy_threshold": 0, 00:24:02.494 "tls_version": 0, 00:24:02.494 "enable_ktls": false 00:24:02.494 } 00:24:02.494 } 00:24:02.494 
] 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "subsystem": "vmd", 00:24:02.494 "config": [] 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "subsystem": "accel", 00:24:02.494 "config": [ 00:24:02.494 { 00:24:02.494 "method": "accel_set_options", 00:24:02.494 "params": { 00:24:02.494 "small_cache_size": 128, 00:24:02.494 "large_cache_size": 16, 00:24:02.494 "task_count": 2048, 00:24:02.494 "sequence_count": 2048, 00:24:02.494 "buf_count": 2048 00:24:02.494 } 00:24:02.494 } 00:24:02.494 ] 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "subsystem": "bdev", 00:24:02.494 "config": [ 00:24:02.494 { 00:24:02.494 "method": "bdev_set_options", 00:24:02.494 "params": { 00:24:02.494 "bdev_io_pool_size": 65535, 00:24:02.494 "bdev_io_cache_size": 256, 00:24:02.494 "bdev_auto_examine": true, 00:24:02.494 "iobuf_small_cache_size": 128, 00:24:02.494 "iobuf_large_cache_size": 16 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "bdev_raid_set_options", 00:24:02.494 "params": { 00:24:02.494 "process_window_size_kb": 1024, 00:24:02.494 "process_max_bandwidth_mb_sec": 0 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "bdev_iscsi_set_options", 00:24:02.494 "params": { 00:24:02.494 "timeout_sec": 30 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "bdev_nvme_set_options", 00:24:02.494 "params": { 00:24:02.494 "action_on_timeout": "none", 00:24:02.494 "timeout_us": 0, 00:24:02.494 "timeout_admin_us": 0, 00:24:02.494 "keep_alive_timeout_ms": 10000, 00:24:02.494 "arbitration_burst": 0, 00:24:02.494 "low_priority_weight": 0, 00:24:02.494 "medium_priority_weight": 0, 00:24:02.494 "high_priority_weight": 0, 00:24:02.494 "nvme_adminq_poll_period_us": 10000, 00:24:02.494 "nvme_ioq_poll_period_us": 0, 00:24:02.494 "io_queue_requests": 0, 00:24:02.494 "delay_cmd_submit": true, 00:24:02.494 "transport_retry_count": 4, 00:24:02.494 "bdev_retry_count": 3, 00:24:02.494 "transport_ack_timeout": 0, 00:24:02.494 "ctrlr_loss_timeout_sec": 0, 
00:24:02.494 "reconnect_delay_sec": 0, 00:24:02.494 "fast_io_fail_timeout_sec": 0, 00:24:02.494 "disable_auto_failback": false, 00:24:02.494 "generate_uuids": false, 00:24:02.494 "transport_tos": 0, 00:24:02.494 "nvme_error_stat": false, 00:24:02.494 "rdma_srq_size": 0, 00:24:02.494 "io_path_stat": false, 00:24:02.494 "allow_accel_sequence": false, 00:24:02.494 "rdma_max_cq_size": 0, 00:24:02.494 "rdma_cm_event_timeout_ms": 0, 00:24:02.494 "dhchap_digests": [ 00:24:02.494 "sha256", 00:24:02.494 "sha384", 00:24:02.494 "sha512" 00:24:02.494 ], 00:24:02.494 "dhchap_dhgroups": [ 00:24:02.494 "null", 00:24:02.494 "ffdhe2048", 00:24:02.494 "ffdhe3072", 00:24:02.494 "ffdhe4096", 00:24:02.494 "ffdhe6144", 00:24:02.494 "ffdhe8192" 00:24:02.494 ] 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "bdev_nvme_set_hotplug", 00:24:02.494 "params": { 00:24:02.494 "period_us": 100000, 00:24:02.494 "enable": false 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "bdev_malloc_create", 00:24:02.494 "params": { 00:24:02.494 "name": "malloc0", 00:24:02.494 "num_blocks": 8192, 00:24:02.494 "block_size": 4096, 00:24:02.494 "physical_block_size": 4096, 00:24:02.494 "uuid": "872e1450-309c-46a5-ab63-a1a35ecfe681", 00:24:02.494 "optimal_io_boundary": 0, 00:24:02.494 "md_size": 0, 00:24:02.494 "dif_type": 0, 00:24:02.494 "dif_is_head_of_md": false, 00:24:02.494 "dif_pi_format": 0 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "bdev_wait_for_examine" 00:24:02.494 } 00:24:02.494 ] 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "subsystem": "nbd", 00:24:02.494 "config": [] 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "subsystem": "scheduler", 00:24:02.494 "config": [ 00:24:02.494 { 00:24:02.494 "method": "framework_set_scheduler", 00:24:02.494 "params": { 00:24:02.494 "name": "static" 00:24:02.494 } 00:24:02.494 } 00:24:02.494 ] 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "subsystem": "nvmf", 00:24:02.494 "config": [ 00:24:02.494 { 
00:24:02.494 "method": "nvmf_set_config", 00:24:02.494 "params": { 00:24:02.494 "discovery_filter": "match_any", 00:24:02.494 "admin_cmd_passthru": { 00:24:02.494 "identify_ctrlr": false 00:24:02.494 }, 00:24:02.494 "dhchap_digests": [ 00:24:02.494 "sha256", 00:24:02.494 "sha384", 00:24:02.494 "sha512" 00:24:02.494 ], 00:24:02.494 "dhchap_dhgroups": [ 00:24:02.494 "null", 00:24:02.494 "ffdhe2048", 00:24:02.494 "ffdhe3072", 00:24:02.494 "ffdhe4096", 00:24:02.494 "ffdhe6144", 00:24:02.494 "ffdhe8192" 00:24:02.494 ] 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "nvmf_set_max_subsystems", 00:24:02.494 "params": { 00:24:02.494 "max_subsystems": 1024 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "nvmf_set_crdt", 00:24:02.494 "params": { 00:24:02.494 "crdt1": 0, 00:24:02.494 "crdt2": 0, 00:24:02.494 "crdt3": 0 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "nvmf_create_transport", 00:24:02.494 "params": { 00:24:02.494 "trtype": "TCP", 00:24:02.494 "max_queue_depth": 128, 00:24:02.494 "max_io_qpairs_per_ctrlr": 127, 00:24:02.494 "in_capsule_data_size": 4096, 00:24:02.494 "max_io_size": 131072, 00:24:02.494 "io_unit_size": 131072, 00:24:02.494 "max_aq_depth": 128, 00:24:02.494 "num_shared_buffers": 511, 00:24:02.494 "buf_cache_size": 4294967295, 00:24:02.494 "dif_insert_or_strip": false, 00:24:02.494 "zcopy": false, 00:24:02.494 "c2h_success": false, 00:24:02.494 "sock_priority": 0, 00:24:02.494 "abort_timeout_sec": 1, 00:24:02.494 "ack_timeout": 0, 00:24:02.494 "data_wr_pool_size": 0 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "nvmf_create_subsystem", 00:24:02.494 "params": { 00:24:02.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.494 "allow_any_host": false, 00:24:02.494 "serial_number": "00000000000000000000", 00:24:02.494 "model_number": "SPDK bdev Controller", 00:24:02.494 "max_namespaces": 32, 00:24:02.494 "min_cntlid": 1, 00:24:02.494 "max_cntlid": 65519, 00:24:02.494 
"ana_reporting": false 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "nvmf_subsystem_add_host", 00:24:02.494 "params": { 00:24:02.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.494 "host": "nqn.2016-06.io.spdk:host1", 00:24:02.494 "psk": "key0" 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "nvmf_subsystem_add_ns", 00:24:02.494 "params": { 00:24:02.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.494 "namespace": { 00:24:02.494 "nsid": 1, 00:24:02.494 "bdev_name": "malloc0", 00:24:02.494 "nguid": "872E1450309C46A5AB63A1A35ECFE681", 00:24:02.494 "uuid": "872e1450-309c-46a5-ab63-a1a35ecfe681", 00:24:02.494 "no_auto_visible": false 00:24:02.494 } 00:24:02.494 } 00:24:02.494 }, 00:24:02.494 { 00:24:02.494 "method": "nvmf_subsystem_add_listener", 00:24:02.494 "params": { 00:24:02.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.494 "listen_address": { 00:24:02.494 "trtype": "TCP", 00:24:02.495 "adrfam": "IPv4", 00:24:02.495 "traddr": "10.0.0.2", 00:24:02.495 "trsvcid": "4420" 00:24:02.495 }, 00:24:02.495 "secure_channel": false, 00:24:02.495 "sock_impl": "ssl" 00:24:02.495 } 00:24:02.495 } 00:24:02.495 ] 00:24:02.495 } 00:24:02.495 ] 00:24:02.495 }' 00:24:02.495 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:03.060 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:03.060 "subsystems": [ 00:24:03.060 { 00:24:03.060 "subsystem": "keyring", 00:24:03.060 "config": [ 00:24:03.060 { 00:24:03.060 "method": "keyring_file_add_key", 00:24:03.060 "params": { 00:24:03.060 "name": "key0", 00:24:03.060 "path": "/tmp/tmp.U3GzqFrrFs" 00:24:03.060 } 00:24:03.060 } 00:24:03.060 ] 00:24:03.060 }, 00:24:03.060 { 00:24:03.060 "subsystem": "iobuf", 00:24:03.060 "config": [ 00:24:03.060 { 00:24:03.060 "method": "iobuf_set_options", 00:24:03.060 "params": { 00:24:03.060 
"small_pool_count": 8192, 00:24:03.060 "large_pool_count": 1024, 00:24:03.060 "small_bufsize": 8192, 00:24:03.060 "large_bufsize": 135168, 00:24:03.060 "enable_numa": false 00:24:03.060 } 00:24:03.061 } 00:24:03.061 ] 00:24:03.061 }, 00:24:03.061 { 00:24:03.061 "subsystem": "sock", 00:24:03.061 "config": [ 00:24:03.061 { 00:24:03.061 "method": "sock_set_default_impl", 00:24:03.061 "params": { 00:24:03.061 "impl_name": "posix" 00:24:03.061 } 00:24:03.061 }, 00:24:03.061 { 00:24:03.061 "method": "sock_impl_set_options", 00:24:03.061 "params": { 00:24:03.061 "impl_name": "ssl", 00:24:03.061 "recv_buf_size": 4096, 00:24:03.061 "send_buf_size": 4096, 00:24:03.061 "enable_recv_pipe": true, 00:24:03.061 "enable_quickack": false, 00:24:03.061 "enable_placement_id": 0, 00:24:03.061 "enable_zerocopy_send_server": true, 00:24:03.061 "enable_zerocopy_send_client": false, 00:24:03.061 "zerocopy_threshold": 0, 00:24:03.061 "tls_version": 0, 00:24:03.061 "enable_ktls": false 00:24:03.061 } 00:24:03.061 }, 00:24:03.061 { 00:24:03.061 "method": "sock_impl_set_options", 00:24:03.061 "params": { 00:24:03.061 "impl_name": "posix", 00:24:03.061 "recv_buf_size": 2097152, 00:24:03.061 "send_buf_size": 2097152, 00:24:03.061 "enable_recv_pipe": true, 00:24:03.061 "enable_quickack": false, 00:24:03.061 "enable_placement_id": 0, 00:24:03.061 "enable_zerocopy_send_server": true, 00:24:03.061 "enable_zerocopy_send_client": false, 00:24:03.061 "zerocopy_threshold": 0, 00:24:03.061 "tls_version": 0, 00:24:03.061 "enable_ktls": false 00:24:03.061 } 00:24:03.061 } 00:24:03.061 ] 00:24:03.061 }, 00:24:03.061 { 00:24:03.061 "subsystem": "vmd", 00:24:03.061 "config": [] 00:24:03.061 }, 00:24:03.061 { 00:24:03.061 "subsystem": "accel", 00:24:03.061 "config": [ 00:24:03.061 { 00:24:03.061 "method": "accel_set_options", 00:24:03.061 "params": { 00:24:03.061 "small_cache_size": 128, 00:24:03.061 "large_cache_size": 16, 00:24:03.061 "task_count": 2048, 00:24:03.061 "sequence_count": 2048, 00:24:03.061 
"buf_count": 2048 00:24:03.061 } 00:24:03.061 } 00:24:03.061 ] 00:24:03.061 }, 00:24:03.061 { 00:24:03.061 "subsystem": "bdev", 00:24:03.061 "config": [ 00:24:03.061 { 00:24:03.061 "method": "bdev_set_options", 00:24:03.061 "params": { 00:24:03.061 "bdev_io_pool_size": 65535, 00:24:03.061 "bdev_io_cache_size": 256, 00:24:03.061 "bdev_auto_examine": true, 00:24:03.061 "iobuf_small_cache_size": 128, 00:24:03.061 "iobuf_large_cache_size": 16 00:24:03.061 } 00:24:03.061 }, 00:24:03.061 { 00:24:03.061 "method": "bdev_raid_set_options", 00:24:03.061 "params": { 00:24:03.061 "process_window_size_kb": 1024, 00:24:03.061 "process_max_bandwidth_mb_sec": 0 00:24:03.061 } 00:24:03.061 }, 00:24:03.061 { 00:24:03.061 "method": "bdev_iscsi_set_options", 00:24:03.061 "params": { 00:24:03.061 "timeout_sec": 30 00:24:03.061 } 00:24:03.061 }, 00:24:03.061 { 00:24:03.061 "method": "bdev_nvme_set_options", 00:24:03.061 "params": { 00:24:03.061 "action_on_timeout": "none", 00:24:03.061 "timeout_us": 0, 00:24:03.061 "timeout_admin_us": 0, 00:24:03.061 "keep_alive_timeout_ms": 10000, 00:24:03.061 "arbitration_burst": 0, 00:24:03.061 "low_priority_weight": 0, 00:24:03.061 "medium_priority_weight": 0, 00:24:03.061 "high_priority_weight": 0, 00:24:03.061 "nvme_adminq_poll_period_us": 10000, 00:24:03.061 "nvme_ioq_poll_period_us": 0, 00:24:03.061 "io_queue_requests": 512, 00:24:03.061 "delay_cmd_submit": true, 00:24:03.061 "transport_retry_count": 4, 00:24:03.061 "bdev_retry_count": 3, 00:24:03.061 "transport_ack_timeout": 0, 00:24:03.061 "ctrlr_loss_timeout_sec": 0, 00:24:03.061 "reconnect_delay_sec": 0, 00:24:03.061 "fast_io_fail_timeout_sec": 0, 00:24:03.061 "disable_auto_failback": false, 00:24:03.061 "generate_uuids": false, 00:24:03.061 "transport_tos": 0, 00:24:03.061 "nvme_error_stat": false, 00:24:03.061 "rdma_srq_size": 0, 00:24:03.061 "io_path_stat": false, 00:24:03.061 "allow_accel_sequence": false, 00:24:03.061 "rdma_max_cq_size": 0, 00:24:03.061 "rdma_cm_event_timeout_ms": 0, 
00:24:03.061 "dhchap_digests": [ 00:24:03.061 "sha256", 00:24:03.061 "sha384", 00:24:03.061 "sha512" 00:24:03.061 ], 00:24:03.061 "dhchap_dhgroups": [ 00:24:03.061 "null", 00:24:03.061 "ffdhe2048", 00:24:03.061 "ffdhe3072", 00:24:03.061 "ffdhe4096", 00:24:03.061 "ffdhe6144", 00:24:03.061 "ffdhe8192" 00:24:03.061 ] 00:24:03.061 } 00:24:03.061 }, 00:24:03.061 { 00:24:03.061 "method": "bdev_nvme_attach_controller", 00:24:03.061 "params": { 00:24:03.061 "name": "nvme0", 00:24:03.061 "trtype": "TCP", 00:24:03.061 "adrfam": "IPv4", 00:24:03.061 "traddr": "10.0.0.2", 00:24:03.061 "trsvcid": "4420", 00:24:03.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.061 "prchk_reftag": false, 00:24:03.061 "prchk_guard": false, 00:24:03.061 "ctrlr_loss_timeout_sec": 0, 00:24:03.061 "reconnect_delay_sec": 0, 00:24:03.061 "fast_io_fail_timeout_sec": 0, 00:24:03.061 "psk": "key0", 00:24:03.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.061 "hdgst": false, 00:24:03.061 "ddgst": false, 00:24:03.061 "multipath": "multipath" 00:24:03.062 } 00:24:03.062 }, 00:24:03.062 { 00:24:03.062 "method": "bdev_nvme_set_hotplug", 00:24:03.062 "params": { 00:24:03.062 "period_us": 100000, 00:24:03.062 "enable": false 00:24:03.062 } 00:24:03.062 }, 00:24:03.062 { 00:24:03.062 "method": "bdev_enable_histogram", 00:24:03.062 "params": { 00:24:03.062 "name": "nvme0n1", 00:24:03.062 "enable": true 00:24:03.062 } 00:24:03.062 }, 00:24:03.062 { 00:24:03.062 "method": "bdev_wait_for_examine" 00:24:03.062 } 00:24:03.062 ] 00:24:03.062 }, 00:24:03.062 { 00:24:03.062 "subsystem": "nbd", 00:24:03.062 "config": [] 00:24:03.062 } 00:24:03.062 ] 00:24:03.062 }' 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 770403 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 770403 ']' 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 770403 00:24:03.062 18:44:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770403 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770403' 00:24:03.062 killing process with pid 770403 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 770403 00:24:03.062 Received shutdown signal, test time was about 1.000000 seconds 00:24:03.062 00:24:03.062 Latency(us) 00:24:03.062 [2024-11-17T17:44:49.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.062 [2024-11-17T17:44:49.638Z] =================================================================================================================== 00:24:03.062 [2024-11-17T17:44:49.638Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 770403 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 770372 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 770372 ']' 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 770372 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.062 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.062 18:44:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770372 00:24:03.321 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:03.321 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:03.321 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770372' 00:24:03.321 killing process with pid 770372 00:24:03.321 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 770372 00:24:03.321 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 770372 00:24:03.321 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:03.321 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.321 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:03.321 "subsystems": [ 00:24:03.321 { 00:24:03.321 "subsystem": "keyring", 00:24:03.321 "config": [ 00:24:03.321 { 00:24:03.321 "method": "keyring_file_add_key", 00:24:03.321 "params": { 00:24:03.321 "name": "key0", 00:24:03.321 "path": "/tmp/tmp.U3GzqFrrFs" 00:24:03.321 } 00:24:03.321 } 00:24:03.321 ] 00:24:03.321 }, 00:24:03.321 { 00:24:03.321 "subsystem": "iobuf", 00:24:03.321 "config": [ 00:24:03.321 { 00:24:03.321 "method": "iobuf_set_options", 00:24:03.321 "params": { 00:24:03.321 "small_pool_count": 8192, 00:24:03.321 "large_pool_count": 1024, 00:24:03.321 "small_bufsize": 8192, 00:24:03.321 "large_bufsize": 135168, 00:24:03.321 "enable_numa": false 00:24:03.321 } 00:24:03.321 } 00:24:03.321 ] 00:24:03.321 }, 00:24:03.321 { 00:24:03.321 "subsystem": "sock", 00:24:03.321 "config": [ 00:24:03.321 { 00:24:03.321 "method": "sock_set_default_impl", 00:24:03.321 "params": { 00:24:03.321 "impl_name": "posix" 00:24:03.321 
} 00:24:03.321 }, 00:24:03.321 { 00:24:03.321 "method": "sock_impl_set_options", 00:24:03.321 "params": { 00:24:03.321 "impl_name": "ssl", 00:24:03.321 "recv_buf_size": 4096, 00:24:03.321 "send_buf_size": 4096, 00:24:03.321 "enable_recv_pipe": true, 00:24:03.321 "enable_quickack": false, 00:24:03.321 "enable_placement_id": 0, 00:24:03.321 "enable_zerocopy_send_server": true, 00:24:03.321 "enable_zerocopy_send_client": false, 00:24:03.321 "zerocopy_threshold": 0, 00:24:03.321 "tls_version": 0, 00:24:03.321 "enable_ktls": false 00:24:03.321 } 00:24:03.321 }, 00:24:03.321 { 00:24:03.321 "method": "sock_impl_set_options", 00:24:03.321 "params": { 00:24:03.321 "impl_name": "posix", 00:24:03.321 "recv_buf_size": 2097152, 00:24:03.321 "send_buf_size": 2097152, 00:24:03.321 "enable_recv_pipe": true, 00:24:03.321 "enable_quickack": false, 00:24:03.321 "enable_placement_id": 0, 00:24:03.321 "enable_zerocopy_send_server": true, 00:24:03.321 "enable_zerocopy_send_client": false, 00:24:03.321 "zerocopy_threshold": 0, 00:24:03.321 "tls_version": 0, 00:24:03.321 "enable_ktls": false 00:24:03.321 } 00:24:03.321 } 00:24:03.321 ] 00:24:03.321 }, 00:24:03.321 { 00:24:03.321 "subsystem": "vmd", 00:24:03.321 "config": [] 00:24:03.321 }, 00:24:03.321 { 00:24:03.321 "subsystem": "accel", 00:24:03.321 "config": [ 00:24:03.321 { 00:24:03.321 "method": "accel_set_options", 00:24:03.321 "params": { 00:24:03.321 "small_cache_size": 128, 00:24:03.321 "large_cache_size": 16, 00:24:03.321 "task_count": 2048, 00:24:03.321 "sequence_count": 2048, 00:24:03.321 "buf_count": 2048 00:24:03.321 } 00:24:03.321 } 00:24:03.321 ] 00:24:03.321 }, 00:24:03.321 { 00:24:03.321 "subsystem": "bdev", 00:24:03.321 "config": [ 00:24:03.321 { 00:24:03.321 "method": "bdev_set_options", 00:24:03.321 "params": { 00:24:03.321 "bdev_io_pool_size": 65535, 00:24:03.321 "bdev_io_cache_size": 256, 00:24:03.321 "bdev_auto_examine": true, 00:24:03.321 "iobuf_small_cache_size": 128, 00:24:03.321 "iobuf_large_cache_size": 16 
00:24:03.321 } 00:24:03.321 }, 00:24:03.321 { 00:24:03.321 "method": "bdev_raid_set_options", 00:24:03.321 "params": { 00:24:03.321 "process_window_size_kb": 1024, 00:24:03.321 "process_max_bandwidth_mb_sec": 0 00:24:03.321 } 00:24:03.321 }, 00:24:03.321 { 00:24:03.321 "method": "bdev_iscsi_set_options", 00:24:03.321 "params": { 00:24:03.321 "timeout_sec": 30 00:24:03.321 } 00:24:03.321 }, 00:24:03.321 { 00:24:03.321 "method": "bdev_nvme_set_options", 00:24:03.321 "params": { 00:24:03.321 "action_on_timeout": "none", 00:24:03.321 "timeout_us": 0, 00:24:03.321 "timeout_admin_us": 0, 00:24:03.321 "keep_alive_timeout_ms": 10000, 00:24:03.321 "arbitration_burst": 0, 00:24:03.321 "low_priority_weight": 0, 00:24:03.321 "medium_priority_weight": 0, 00:24:03.321 "high_priority_weight": 0, 00:24:03.321 "nvme_adminq_poll_period_us": 10000, 00:24:03.321 "nvme_ioq_poll_period_us": 0, 00:24:03.321 "io_queue_requests": 0, 00:24:03.321 "delay_cmd_submit": true, 00:24:03.321 "transport_retry_count": 4, 00:24:03.321 "bdev_retry_count": 3, 00:24:03.321 "transport_ack_timeout": 0, 00:24:03.321 "ctrlr_loss_timeout_sec": 0, 00:24:03.321 "reconnect_delay_sec": 0, 00:24:03.321 "fast_io_fail_timeout_sec": 0, 00:24:03.321 "disable_auto_failback": false, 00:24:03.321 "generate_uuids": false, 00:24:03.321 "transport_tos": 0, 00:24:03.321 "nvme_error_stat": false, 00:24:03.321 "rdma_srq_size": 0, 00:24:03.321 "io_path_stat": false, 00:24:03.321 "allow_accel_sequence": false, 00:24:03.321 "rdma_max_cq_size": 0, 00:24:03.321 "rdma_cm_event_timeout_ms": 0, 00:24:03.321 "dhchap_digests": [ 00:24:03.321 "sha256", 00:24:03.321 "sha384", 00:24:03.321 "sha512" 00:24:03.321 ], 00:24:03.321 "dhchap_dhgroups": [ 00:24:03.321 "null", 00:24:03.321 "ffdhe2048", 00:24:03.321 "ffdhe3072", 00:24:03.321 "ffdhe4096", 00:24:03.321 "ffdhe6144", 00:24:03.321 "ffdhe8192" 00:24:03.321 ] 00:24:03.321 } 00:24:03.321 }, 00:24:03.321 { 00:24:03.321 "method": "bdev_nvme_set_hotplug", 00:24:03.321 "params": { 00:24:03.321 
"period_us": 100000, 00:24:03.321 "enable": false 00:24:03.321 } 00:24:03.321 }, 00:24:03.321 { 00:24:03.321 "method": "bdev_malloc_create", 00:24:03.321 "params": { 00:24:03.321 "name": "malloc0", 00:24:03.321 "num_blocks": 8192, 00:24:03.321 "block_size": 4096, 00:24:03.321 "physical_block_size": 4096, 00:24:03.321 "uuid": "872e1450-309c-46a5-ab63-a1a35ecfe681", 00:24:03.321 "optimal_io_boundary": 0, 00:24:03.321 "md_size": 0, 00:24:03.322 "dif_type": 0, 00:24:03.322 "dif_is_head_of_md": false, 00:24:03.322 "dif_pi_format": 0 00:24:03.322 } 00:24:03.322 }, 00:24:03.322 { 00:24:03.322 "method": "bdev_wait_for_examine" 00:24:03.322 } 00:24:03.322 ] 00:24:03.322 }, 00:24:03.322 { 00:24:03.322 "subsystem": "nbd", 00:24:03.322 "config": [] 00:24:03.322 }, 00:24:03.322 { 00:24:03.322 "subsystem": "scheduler", 00:24:03.322 "config": [ 00:24:03.322 { 00:24:03.322 "method": "framework_set_scheduler", 00:24:03.322 "params": { 00:24:03.322 "name": "static" 00:24:03.322 } 00:24:03.322 } 00:24:03.322 ] 00:24:03.322 }, 00:24:03.322 { 00:24:03.322 "subsystem": "nvmf", 00:24:03.322 "config": [ 00:24:03.322 { 00:24:03.322 "method": "nvmf_set_config", 00:24:03.322 "params": { 00:24:03.322 "discovery_filter": "match_any", 00:24:03.322 "admin_cmd_passthru": { 00:24:03.322 "identify_ctrlr": false 00:24:03.322 }, 00:24:03.322 "dhchap_digests": [ 00:24:03.322 "sha256", 00:24:03.322 "sha384", 00:24:03.322 "sha512" 00:24:03.322 ], 00:24:03.322 "dhchap_dhgroups": [ 00:24:03.322 "null", 00:24:03.322 "ffdhe2048", 00:24:03.322 "ffdhe3072", 00:24:03.322 "ffdhe4096", 00:24:03.322 "ffdhe6144", 00:24:03.322 "ffdhe8192" 00:24:03.322 ] 00:24:03.322 } 00:24:03.322 }, 00:24:03.322 { 00:24:03.322 "method": "nvmf_set_max_subsystems", 00:24:03.322 "params": { 00:24:03.322 "max_subsystems": 1024 00:24:03.322 } 00:24:03.322 }, 00:24:03.322 { 00:24:03.322 "method": "nvmf_set_crdt", 00:24:03.322 "params": { 00:24:03.322 "crdt1": 0, 00:24:03.322 "crdt2": 0, 00:24:03.322 "crdt3": 0 00:24:03.322 } 
00:24:03.322 }, 00:24:03.322 { 00:24:03.322 "method": "nvmf_create_transport", 00:24:03.322 "params": { 00:24:03.322 "trtype": "TCP", 00:24:03.322 "max_queue_depth": 128, 00:24:03.322 "max_io_qpairs_per_ctrlr": 127, 00:24:03.322 "in_capsule_data_size": 4096, 00:24:03.322 "max_io_size": 131072, 00:24:03.322 "io_unit_size": 131072, 00:24:03.322 "max_aq_depth": 128, 00:24:03.322 "num_shared_buffers": 511, 00:24:03.322 "buf_cache_size": 4294967295, 00:24:03.322 "dif_insert_or_strip": false, 00:24:03.322 "zcopy": false, 00:24:03.322 "c2h_success": false, 00:24:03.322 "sock_priority": 0, 00:24:03.322 "abort_timeout_sec": 1, 00:24:03.322 "ack_timeout": 0, 00:24:03.322 "data_wr_pool_size": 0 00:24:03.322 } 00:24:03.322 }, 00:24:03.322 { 00:24:03.322 "method": "nvmf_create_subsystem", 00:24:03.322 "params": { 00:24:03.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.322 "allow_any_host": false, 00:24:03.322 "serial_number": "00000000000000000000", 00:24:03.322 "model_number": "SPDK bdev Controller", 00:24:03.322 "max_namespaces": 32, 00:24:03.322 "min_cntlid": 1, 00:24:03.322 "max_cntlid": 65519, 00:24:03.322 "ana_reporting": false 00:24:03.322 } 00:24:03.322 }, 00:24:03.322 { 00:24:03.322 "method": "nvmf_subsystem_add_host", 00:24:03.322 "params": { 00:24:03.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.322 "host": "nqn.2016-06.io.spdk:host1", 00:24:03.322 "psk": "key0" 00:24:03.322 } 00:24:03.322 }, 00:24:03.322 { 00:24:03.322 "method": "nvmf_subsystem_add_ns", 00:24:03.322 "params": { 00:24:03.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.322 "namespace": { 00:24:03.322 "nsid": 1, 00:24:03.322 "bdev_name": "malloc0", 00:24:03.322 "nguid": "872E1450309C46A5AB63A1A35ECFE681", 00:24:03.322 "uuid": "872e1450-309c-46a5-ab63-a1a35ecfe681", 00:24:03.322 "no_auto_visible": false 00:24:03.322 } 00:24:03.322 } 00:24:03.322 }, 00:24:03.322 { 00:24:03.322 "method": "nvmf_subsystem_add_listener", 00:24:03.322 "params": { 00:24:03.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:03.322 "listen_address": { 00:24:03.322 "trtype": "TCP", 00:24:03.322 "adrfam": "IPv4", 00:24:03.322 "traddr": "10.0.0.2", 00:24:03.322 "trsvcid": "4420" 00:24:03.322 }, 00:24:03.322 "secure_channel": false, 00:24:03.322 "sock_impl": "ssl" 00:24:03.322 } 00:24:03.322 } 00:24:03.322 ] 00:24:03.322 } 00:24:03.322 ] 00:24:03.322 }' 00:24:03.322 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.322 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.322 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=770802 00:24:03.322 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:03.322 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 770802 00:24:03.322 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 770802 ']' 00:24:03.322 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.322 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.322 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.322 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.322 18:44:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.580 [2024-11-17 18:44:49.924931] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
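For readability, the TLS target configuration that the script echoes into /dev/fd/62 above can be reassembled as plain JSON. This is an abridged sketch limited to the TLS-relevant pieces; every value is copied from the trace, and the iobuf/sock/accel/bdev/scheduler subsystems and remaining nvmf parameters are omitted here, not changed:

```json
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        {
          "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.U3GzqFrrFs" }
        }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        {
          "method": "nvmf_create_subsystem",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "allow_any_host": false,
            "model_number": "SPDK bdev Controller"
          }
        },
        {
          "method": "nvmf_subsystem_add_host",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "host": "nqn.2016-06.io.spdk:host1",
            "psk": "key0"
          }
        },
        {
          "method": "nvmf_subsystem_add_ns",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": { "nsid": 1, "bdev_name": "malloc0" }
          }
        },
        {
          "method": "nvmf_subsystem_add_listener",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "listen_address": {
              "trtype": "TCP",
              "adrfam": "IPv4",
              "traddr": "10.0.0.2",
              "trsvcid": "4420"
            },
            "secure_channel": false,
            "sock_impl": "ssl"
          }
        }
      ]
    }
  ]
}
```

The net effect, as the subsequent log lines confirm, is a TCP listener on 10.0.0.2:4420 over the ssl sock implementation, with host nqn.2016-06.io.spdk:host1 admitted to cnode1 via the file-backed PSK key0.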
00:24:03.580 [2024-11-17 18:44:49.925065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.580 [2024-11-17 18:44:49.996644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.580 [2024-11-17 18:44:50.043290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.580 [2024-11-17 18:44:50.043349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.580 [2024-11-17 18:44:50.043374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.580 [2024-11-17 18:44:50.043385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.580 [2024-11-17 18:44:50.043395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:03.580 [2024-11-17 18:44:50.044013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.838 [2024-11-17 18:44:50.288277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.838 [2024-11-17 18:44:50.320311] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:03.839 [2024-11-17 18:44:50.320521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.405 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.405 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:04.405 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:04.405 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.405 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.663 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.663 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=770955 00:24:04.663 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 770955 /var/tmp/bdevperf.sock 00:24:04.663 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 770955 ']' 00:24:04.663 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.663 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:04.663 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:04.663 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:04.663 "subsystems": [ 00:24:04.663 { 00:24:04.663 "subsystem": "keyring", 00:24:04.663 "config": [ 00:24:04.663 { 00:24:04.663 "method": "keyring_file_add_key", 00:24:04.663 "params": { 00:24:04.663 "name": "key0", 00:24:04.663 "path": "/tmp/tmp.U3GzqFrrFs" 00:24:04.663 } 00:24:04.663 } 00:24:04.663 ] 00:24:04.663 }, 00:24:04.663 { 00:24:04.663 "subsystem": "iobuf", 00:24:04.663 "config": [ 00:24:04.663 { 00:24:04.663 "method": "iobuf_set_options", 00:24:04.663 "params": { 00:24:04.663 "small_pool_count": 8192, 00:24:04.663 "large_pool_count": 1024, 00:24:04.663 "small_bufsize": 8192, 00:24:04.663 "large_bufsize": 135168, 00:24:04.663 "enable_numa": false 00:24:04.663 } 00:24:04.663 } 00:24:04.663 ] 00:24:04.663 }, 00:24:04.663 { 00:24:04.663 "subsystem": "sock", 00:24:04.663 "config": [ 00:24:04.663 { 00:24:04.663 "method": "sock_set_default_impl", 00:24:04.663 "params": { 00:24:04.663 "impl_name": "posix" 00:24:04.663 } 00:24:04.663 }, 00:24:04.663 { 00:24:04.663 "method": "sock_impl_set_options", 00:24:04.663 "params": { 00:24:04.663 "impl_name": "ssl", 00:24:04.663 "recv_buf_size": 4096, 00:24:04.663 "send_buf_size": 4096, 00:24:04.663 "enable_recv_pipe": true, 00:24:04.664 "enable_quickack": false, 00:24:04.664 "enable_placement_id": 0, 00:24:04.664 "enable_zerocopy_send_server": true, 00:24:04.664 "enable_zerocopy_send_client": false, 00:24:04.664 "zerocopy_threshold": 0, 00:24:04.664 "tls_version": 0, 00:24:04.664 "enable_ktls": false 00:24:04.664 } 00:24:04.664 }, 00:24:04.664 { 00:24:04.664 "method": "sock_impl_set_options", 00:24:04.664 "params": { 00:24:04.664 "impl_name": "posix", 00:24:04.664 "recv_buf_size": 2097152, 00:24:04.664 "send_buf_size": 2097152, 00:24:04.664 "enable_recv_pipe": true, 00:24:04.664 "enable_quickack": false, 00:24:04.664 "enable_placement_id": 0, 00:24:04.664 "enable_zerocopy_send_server": true, 00:24:04.664 
"enable_zerocopy_send_client": false, 00:24:04.664 "zerocopy_threshold": 0, 00:24:04.664 "tls_version": 0, 00:24:04.664 "enable_ktls": false 00:24:04.664 } 00:24:04.664 } 00:24:04.664 ] 00:24:04.664 }, 00:24:04.664 { 00:24:04.664 "subsystem": "vmd", 00:24:04.664 "config": [] 00:24:04.664 }, 00:24:04.664 { 00:24:04.664 "subsystem": "accel", 00:24:04.664 "config": [ 00:24:04.664 { 00:24:04.664 "method": "accel_set_options", 00:24:04.664 "params": { 00:24:04.664 "small_cache_size": 128, 00:24:04.664 "large_cache_size": 16, 00:24:04.664 "task_count": 2048, 00:24:04.664 "sequence_count": 2048, 00:24:04.664 "buf_count": 2048 00:24:04.664 } 00:24:04.664 } 00:24:04.664 ] 00:24:04.664 }, 00:24:04.664 { 00:24:04.664 "subsystem": "bdev", 00:24:04.664 "config": [ 00:24:04.664 { 00:24:04.664 "method": "bdev_set_options", 00:24:04.664 "params": { 00:24:04.664 "bdev_io_pool_size": 65535, 00:24:04.664 "bdev_io_cache_size": 256, 00:24:04.664 "bdev_auto_examine": true, 00:24:04.664 "iobuf_small_cache_size": 128, 00:24:04.664 "iobuf_large_cache_size": 16 00:24:04.664 } 00:24:04.664 }, 00:24:04.664 { 00:24:04.664 "method": "bdev_raid_set_options", 00:24:04.664 "params": { 00:24:04.664 "process_window_size_kb": 1024, 00:24:04.664 "process_max_bandwidth_mb_sec": 0 00:24:04.664 } 00:24:04.664 }, 00:24:04.664 { 00:24:04.664 "method": "bdev_iscsi_set_options", 00:24:04.664 "params": { 00:24:04.664 "timeout_sec": 30 00:24:04.664 } 00:24:04.664 }, 00:24:04.664 { 00:24:04.664 "method": "bdev_nvme_set_options", 00:24:04.664 "params": { 00:24:04.664 "action_on_timeout": "none", 00:24:04.664 "timeout_us": 0, 00:24:04.664 "timeout_admin_us": 0, 00:24:04.664 "keep_alive_timeout_ms": 10000, 00:24:04.664 "arbitration_burst": 0, 00:24:04.664 "low_priority_weight": 0, 00:24:04.664 "medium_priority_weight": 0, 00:24:04.664 "high_priority_weight": 0, 00:24:04.664 "nvme_adminq_poll_period_us": 10000, 00:24:04.664 "nvme_ioq_poll_period_us": 0, 00:24:04.664 "io_queue_requests": 512, 00:24:04.664 
"delay_cmd_submit": true, 00:24:04.664 "transport_retry_count": 4, 00:24:04.664 "bdev_retry_count": 3, 00:24:04.664 "transport_ack_timeout": 0, 00:24:04.664 "ctrlr_loss_timeout_sec": 0, 00:24:04.664 "reconnect_delay_sec": 0, 00:24:04.664 "fast_io_fail_timeout_sec": 0, 00:24:04.664 "disable_auto_failback": false, 00:24:04.664 "generate_uuids": false, 00:24:04.664 "transport_tos": 0, 00:24:04.664 "nvme_error_stat": false, 00:24:04.664 "rdma_srq_size": 0, 00:24:04.664 "io_path_stat": false, 00:24:04.664 "allow_accel_sequence": false, 00:24:04.664 "rdma_max_cq_size": 0, 00:24:04.664 "rdma_cm_event_timeout_ms": 0, 00:24:04.664 "dhchap_digests": [ 00:24:04.664 "sha256", 00:24:04.664 "sha384", 00:24:04.664 "sha512" 00:24:04.664 ], 00:24:04.664 "dhchap_dhgroups": [ 00:24:04.664 "null", 00:24:04.664 "ffdhe2048", 00:24:04.664 "ffdhe3072", 00:24:04.664 "ffdhe4096", 00:24:04.664 "ffdhe6144", 00:24:04.664 "ffdhe8192" 00:24:04.664 ] 00:24:04.664 } 00:24:04.664 }, 00:24:04.664 { 00:24:04.664 "method": "bdev_nvme_attach_controller", 00:24:04.664 "params": { 00:24:04.664 "name": "nvme0", 00:24:04.664 "trtype": "TCP", 00:24:04.664 "adrfam": "IPv4", 00:24:04.664 "traddr": "10.0.0.2", 00:24:04.664 "trsvcid": "4420", 00:24:04.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.664 "prchk_reftag": false, 00:24:04.664 "prchk_guard": false, 00:24:04.664 "ctrlr_loss_timeout_sec": 0, 00:24:04.664 "reconnect_delay_sec": 0, 00:24:04.664 "fast_io_fail_timeout_sec": 0, 00:24:04.664 "psk": "key0", 00:24:04.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:04.664 "hdgst": false, 00:24:04.664 "ddgst": false, 00:24:04.664 "multipath": "multipath" 00:24:04.664 } 00:24:04.664 }, 00:24:04.664 { 00:24:04.664 "method": "bdev_nvme_set_hotplug", 00:24:04.664 "params": { 00:24:04.664 "period_us": 100000, 00:24:04.664 "enable": false 00:24:04.664 } 00:24:04.664 }, 00:24:04.664 { 00:24:04.664 "method": "bdev_enable_histogram", 00:24:04.664 "params": { 00:24:04.664 "name": "nvme0n1", 00:24:04.664 "enable": 
true 00:24:04.664 } 00:24:04.664 }, 00:24:04.664 { 00:24:04.664 "method": "bdev_wait_for_examine" 00:24:04.664 } 00:24:04.664 ] 00:24:04.664 }, 00:24:04.664 { 00:24:04.664 "subsystem": "nbd", 00:24:04.664 "config": [] 00:24:04.664 } 00:24:04.664 ] 00:24:04.664 }' 00:24:04.664 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:04.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:04.664 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.664 18:44:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.664 [2024-11-17 18:44:51.046519] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:04.664 [2024-11-17 18:44:51.046618] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770955 ] 00:24:04.664 [2024-11-17 18:44:51.113888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.664 [2024-11-17 18:44:51.159822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.923 [2024-11-17 18:44:51.338100] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:04.923 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.923 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:04.923 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:04.923 18:44:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:05.181 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.181 18:44:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:05.439 Running I/O for 1 seconds... 00:24:06.373 3665.00 IOPS, 14.32 MiB/s 00:24:06.373 Latency(us) 00:24:06.373 [2024-11-17T17:44:52.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.373 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:06.373 Verification LBA range: start 0x0 length 0x2000 00:24:06.373 nvme0n1 : 1.02 3703.10 14.47 0.00 0.00 34188.09 6310.87 25437.68 00:24:06.373 [2024-11-17T17:44:52.949Z] =================================================================================================================== 00:24:06.373 [2024-11-17T17:44:52.949Z] Total : 3703.10 14.47 0.00 0.00 34188.09 6310.87 25437.68 00:24:06.373 { 00:24:06.373 "results": [ 00:24:06.373 { 00:24:06.373 "job": "nvme0n1", 00:24:06.373 "core_mask": "0x2", 00:24:06.373 "workload": "verify", 00:24:06.373 "status": "finished", 00:24:06.373 "verify_range": { 00:24:06.373 "start": 0, 00:24:06.373 "length": 8192 00:24:06.373 }, 00:24:06.373 "queue_depth": 128, 00:24:06.373 "io_size": 4096, 00:24:06.373 "runtime": 1.024278, 00:24:06.373 "iops": 3703.096229734506, 00:24:06.373 "mibps": 14.465219647400414, 00:24:06.374 "io_failed": 0, 00:24:06.374 "io_timeout": 0, 00:24:06.374 "avg_latency_us": 34188.09065119958, 00:24:06.374 "min_latency_us": 6310.874074074074, 00:24:06.374 "max_latency_us": 25437.677037037036 00:24:06.374 } 00:24:06.374 ], 00:24:06.374 "core_count": 1 00:24:06.374 } 00:24:06.374 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:06.374 18:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:06.374 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:06.374 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:06.374 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:06.374 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:06.374 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:06.374 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:06.374 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:06.374 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:06.374 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:06.374 nvmf_trace.0 00:24:06.632 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:06.632 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 770955 00:24:06.632 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 770955 ']' 00:24:06.632 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 770955 00:24:06.632 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:06.632 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.632 18:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 770955 00:24:06.632 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:06.632 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:06.632 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770955' 00:24:06.632 killing process with pid 770955 00:24:06.632 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 770955 00:24:06.632 Received shutdown signal, test time was about 1.000000 seconds 00:24:06.632 00:24:06.632 Latency(us) 00:24:06.632 [2024-11-17T17:44:53.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.632 [2024-11-17T17:44:53.208Z] =================================================================================================================== 00:24:06.632 [2024-11-17T17:44:53.208Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.632 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 770955 00:24:06.632 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:06.632 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:06.632 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:06.632 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:06.632 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:06.632 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:06.632 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:06.632 rmmod nvme_tcp 00:24:06.891 rmmod nvme_fabrics 00:24:06.891 rmmod nvme_keyring 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 770802 ']' 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 770802 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 770802 ']' 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 770802 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770802 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770802' 00:24:06.891 killing process with pid 770802 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 770802 00:24:06.891 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 770802 00:24:07.150 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:07.150 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:07.150 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:07.150 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:24:07.150 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:07.150 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:07.150 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:07.150 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:07.150 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:07.150 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.151 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.151 18:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.059 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:09.059 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.LWo2UrEfvd /tmp/tmp.APl1JKDoJd /tmp/tmp.U3GzqFrrFs 00:24:09.059 00:24:09.059 real 1m22.115s 00:24:09.059 user 2m15.618s 00:24:09.059 sys 0m25.478s 00:24:09.059 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:09.059 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.059 ************************************ 00:24:09.059 END TEST nvmf_tls 00:24:09.059 ************************************ 00:24:09.059 18:44:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:09.059 18:44:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:09.059 18:44:55 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:09.059 18:44:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:09.059 ************************************ 00:24:09.059 START TEST nvmf_fips 00:24:09.059 ************************************ 00:24:09.059 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:09.318 * Looking for test storage... 00:24:09.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:09.318 
18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:09.318 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:09.319 18:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:09.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.319 --rc genhtml_branch_coverage=1 00:24:09.319 --rc genhtml_function_coverage=1 00:24:09.319 --rc genhtml_legend=1 00:24:09.319 --rc geninfo_all_blocks=1 00:24:09.319 --rc geninfo_unexecuted_blocks=1 00:24:09.319 00:24:09.319 ' 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:09.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.319 --rc genhtml_branch_coverage=1 00:24:09.319 --rc genhtml_function_coverage=1 00:24:09.319 --rc genhtml_legend=1 00:24:09.319 --rc geninfo_all_blocks=1 00:24:09.319 --rc geninfo_unexecuted_blocks=1 00:24:09.319 00:24:09.319 ' 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:09.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.319 --rc genhtml_branch_coverage=1 00:24:09.319 --rc genhtml_function_coverage=1 00:24:09.319 --rc genhtml_legend=1 00:24:09.319 --rc geninfo_all_blocks=1 00:24:09.319 --rc geninfo_unexecuted_blocks=1 00:24:09.319 00:24:09.319 ' 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:09.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.319 --rc genhtml_branch_coverage=1 00:24:09.319 --rc genhtml_function_coverage=1 00:24:09.319 --rc genhtml_legend=1 00:24:09.319 --rc geninfo_all_blocks=1 00:24:09.319 --rc geninfo_unexecuted_blocks=1 00:24:09.319 00:24:09.319 ' 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.319 18:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.319 18:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:09.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:09.319 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:09.320 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:09.579 Error setting digest 00:24:09.579 4022CF6AAD7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:09.579 4022CF6AAD7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:09.579 18:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:09.579 18:44:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:12.114 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:12.114 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:12.114 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:12.114 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:12.114 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.115 18:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:12.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:24:12.115 00:24:12.115 --- 10.0.0.2 ping statistics --- 00:24:12.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.115 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:12.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:24:12.115 00:24:12.115 --- 10.0.0.1 ping statistics --- 00:24:12.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.115 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:12.115 18:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=773206 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 773206 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 773206 ']' 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.115 [2024-11-17 18:44:58.347383] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:24:12.115 [2024-11-17 18:44:58.347475] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.115 [2024-11-17 18:44:58.420488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.115 [2024-11-17 18:44:58.466766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.115 [2024-11-17 18:44:58.466818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.115 [2024-11-17 18:44:58.466832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.115 [2024-11-17 18:44:58.466844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.115 [2024-11-17 18:44:58.466853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:12.115 [2024-11-17 18:44:58.467408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.aWl 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.aWl 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.aWl 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.aWl 00:24:12.115 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:12.374 [2024-11-17 18:44:58.906985] tcp.c: 
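fips.sh stores the TLS PSK in a `mktemp` file and restricts it to mode 0600 before handing the path to `setup_nvmf_tgt_conf`. A small stand-alone sketch of that pattern, using the interchange-format key string and the `spdk-psk.XXX` template exactly as they appear in the trace:

```shell
#!/bin/sh
# Write an NVMe/TCP TLS PSK to a private temp file, as fips.sh does.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
printf '%s' "$key" > "$key_path"   # no trailing newline, like `echo -n`
chmod 0600 "$key_path"             # the PSK must not be world-readable
echo "$key_path"
```

The 0600 mode matters: key material passed by path to the target and to bdevperf should be readable only by the owning user.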
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.374 [2024-11-17 18:44:58.922989] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:12.374 [2024-11-17 18:44:58.923210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.631 malloc0 00:24:12.631 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.631 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=773359 00:24:12.631 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:12.631 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 773359 /var/tmp/bdevperf.sock 00:24:12.631 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 773359 ']' 00:24:12.632 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.632 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.632 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.632 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.632 18:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.632 [2024-11-17 18:44:59.058390] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:24:12.632 [2024-11-17 18:44:59.058480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773359 ] 00:24:12.632 [2024-11-17 18:44:59.125021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.632 [2024-11-17 18:44:59.169710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.889 18:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.889 18:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:12.889 18:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.aWl 00:24:13.145 18:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:13.402 [2024-11-17 18:44:59.787871] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.403 TLSTESTn1 00:24:13.403 18:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:13.661 Running I/O for 10 seconds... 
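On the initiator side, the test drives bdevperf over its private RPC socket: first register the PSK file as a named key, then attach the controller with `--psk` referencing that key. A dry-run sketch of the two RPCs, with the socket path, key path, and NQNs copied from the trace (printing only, since the real calls need a live bdevperf listening on the socket):

```shell
#!/bin/sh
# Print the bdevperf-side RPC sequence for a TLS-protected attach.
# Dry run: a real invocation needs bdevperf running with -r <socket>.
tls_attach_plan() {
    rpc="scripts/rpc.py -s /var/tmp/bdevperf.sock"
    key_path="/tmp/spdk-psk.aWl"   # PSK file created earlier in the test
    cat <<EOF
$rpc keyring_file_add_key key0 $key_path
$rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
EOF
}
tls_attach_plan
```

The attach creates the `TLSTESTn1` namespace bdev that the subsequent `perform_tests` run exercises.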
00:24:15.529 3302.00 IOPS, 12.90 MiB/s [2024-11-17T17:45:03.038Z] 3383.50 IOPS, 13.22 MiB/s [2024-11-17T17:45:04.447Z] 3415.33 IOPS, 13.34 MiB/s [2024-11-17T17:45:05.046Z] 3444.25 IOPS, 13.45 MiB/s [2024-11-17T17:45:06.420Z] 3450.60 IOPS, 13.48 MiB/s [2024-11-17T17:45:07.354Z] 3472.50 IOPS, 13.56 MiB/s [2024-11-17T17:45:08.287Z] 3476.29 IOPS, 13.58 MiB/s [2024-11-17T17:45:09.220Z] 3489.88 IOPS, 13.63 MiB/s [2024-11-17T17:45:10.154Z] 3499.22 IOPS, 13.67 MiB/s [2024-11-17T17:45:10.154Z] 3491.40 IOPS, 13.64 MiB/s 00:24:23.578 Latency(us) 00:24:23.578 [2024-11-17T17:45:10.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.578 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:23.578 Verification LBA range: start 0x0 length 0x2000 00:24:23.578 TLSTESTn1 : 10.03 3492.42 13.64 0.00 0.00 36569.33 9660.49 47768.46 00:24:23.578 [2024-11-17T17:45:10.154Z] =================================================================================================================== 00:24:23.578 [2024-11-17T17:45:10.154Z] Total : 3492.42 13.64 0.00 0.00 36569.33 9660.49 47768.46 00:24:23.578 { 00:24:23.578 "results": [ 00:24:23.578 { 00:24:23.578 "job": "TLSTESTn1", 00:24:23.578 "core_mask": "0x4", 00:24:23.578 "workload": "verify", 00:24:23.578 "status": "finished", 00:24:23.578 "verify_range": { 00:24:23.578 "start": 0, 00:24:23.578 "length": 8192 00:24:23.578 }, 00:24:23.578 "queue_depth": 128, 00:24:23.578 "io_size": 4096, 00:24:23.578 "runtime": 10.03317, 00:24:23.578 "iops": 3492.4156572648526, 00:24:23.578 "mibps": 13.64224866119083, 00:24:23.578 "io_failed": 0, 00:24:23.578 "io_timeout": 0, 00:24:23.578 "avg_latency_us": 36569.32559986471, 00:24:23.578 "min_latency_us": 9660.491851851852, 00:24:23.578 "max_latency_us": 47768.462222222224 00:24:23.578 } 00:24:23.578 ], 00:24:23.578 "core_count": 1 00:24:23.578 } 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:23.578 
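In the JSON summary, `mibps` is derived from `iops` and `io_size`: IOPS times the 4096-byte I/O size, converted to MiB. Reproducing the log's 13.64 MiB/s figure from its own `iops` value:

```shell
#!/bin/sh
# mibps = iops * io_size / 2^20, using the values from the run above.
iops=3492.4156572648526
io_size=4096
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / 1048576 }')
echo "$mibps MiB/s"
```

The per-second samples (3302.00 IOPS, 12.90 MiB/s and so on) follow the same relation, so the conversion is a quick sanity check on any line of the table.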
18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:23.578 nvmf_trace.0 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 773359 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 773359 ']' 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 773359 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.578 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773359 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips 
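The `process_shm --id 0` step above finds `nvmf_trace.0` under /dev/shm and archives it with `tar -czf` into the output directory so the tracepoint data survives the run. The same pattern against a throwaway file, with stand-in paths rather than the Jenkins ones:

```shell
#!/bin/sh
# Archive a shared-memory trace file, as process_shm does with nvmf_trace.0.
workdir=$(mktemp -d)
printf 'trace-bytes' > "$workdir/nvmf_trace.0"   # stand-in for /dev/shm/nvmf_trace.0
tar -C "$workdir" -czf "$workdir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
ls "$workdir"
```

Using `-C` keeps the archive member name relative (`nvmf_trace.0`), matching the `tar -C /dev/shm/ ...` invocation in the trace.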
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773359' 00:24:23.837 killing process with pid 773359 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 773359 00:24:23.837 Received shutdown signal, test time was about 10.000000 seconds 00:24:23.837 00:24:23.837 Latency(us) 00:24:23.837 [2024-11-17T17:45:10.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.837 [2024-11-17T17:45:10.413Z] =================================================================================================================== 00:24:23.837 [2024-11-17T17:45:10.413Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 773359 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:23.837 rmmod nvme_tcp 00:24:23.837 rmmod nvme_fabrics 00:24:23.837 rmmod nvme_keyring 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:23.837 18:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 773206 ']' 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 773206 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 773206 ']' 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 773206 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.837 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773206 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773206' 00:24:24.096 killing process with pid 773206 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 773206 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 773206 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
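The `iptr` cleanup helper invoked here removes only the rules the test added: `ipts` tagged every inserted rule with an `SPDK_NVMF:` comment, so teardown can pipe `iptables-save` through `grep -v SPDK_NVMF` and restore the remainder. The filtering step in isolation, on a canned ruleset so no root is needed:

```shell
#!/bin/sh
# Drop SPDK-tagged rules from an iptables-save dump, as iptr does.
ruleset='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -j DROP'
filtered=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
printf '%s\n' "$filtered"
```

Tag-and-filter is what makes the cleanup safe on a shared CI host: pre-existing firewall rules pass through `iptables-restore` untouched.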
00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.096 18:45:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.635 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:26.635 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.aWl 00:24:26.635 00:24:26.635 real 0m17.093s 00:24:26.635 user 0m22.480s 00:24:26.635 sys 0m5.554s 00:24:26.635 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.635 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:26.635 ************************************ 00:24:26.635 END TEST nvmf_fips 00:24:26.635 ************************************ 00:24:26.635 18:45:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:26.635 18:45:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:26.636 ************************************ 00:24:26.636 START TEST nvmf_control_msg_list 00:24:26.636 ************************************ 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:26.636 * Looking for test storage... 00:24:26.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
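The `lt 1.15 2` check being traced here splits both version strings on dots and compares them field by field, treating a missing field as 0 (the `cmp_versions` logic in scripts/common.sh). A compact stand-in with the same field-wise rule, written in awk rather than the bash-array form the script uses:

```shell
#!/bin/sh
# Field-wise dotted-version "less than" (returns 0 when $1 < $2),
# mirroring cmp_versions' per-field comparison; missing fields are 0.
version_lt() {
    awk -v a="$1" -v b="$2" 'BEGIN {
        n1 = split(a, x, "."); n2 = split(b, y, ".")
        n = (n1 > n2) ? n1 : n2
        for (i = 1; i <= n; i++) {
            xi = (i <= n1) ? x[i] + 0 : 0
            yi = (i <= n2) ? y[i] + 0 : 0
            if (xi < yi) exit 0   # a < b
            if (xi > yi) exit 1   # a > b
        }
        exit 1                    # equal
    }'
}
version_lt 1.15 2 && echo "1.15 < 2"
```

Numeric per-field comparison is why `1.15 < 2` holds even though a plain string compare would order them the other way.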
00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:26.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.636 --rc genhtml_branch_coverage=1 00:24:26.636 --rc genhtml_function_coverage=1 00:24:26.636 --rc genhtml_legend=1 00:24:26.636 --rc geninfo_all_blocks=1 00:24:26.636 --rc geninfo_unexecuted_blocks=1 00:24:26.636 00:24:26.636 ' 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:26.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.636 --rc genhtml_branch_coverage=1 00:24:26.636 --rc genhtml_function_coverage=1 00:24:26.636 --rc genhtml_legend=1 00:24:26.636 --rc geninfo_all_blocks=1 00:24:26.636 --rc geninfo_unexecuted_blocks=1 00:24:26.636 00:24:26.636 ' 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:26.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.636 --rc genhtml_branch_coverage=1 00:24:26.636 --rc genhtml_function_coverage=1 00:24:26.636 --rc genhtml_legend=1 00:24:26.636 --rc geninfo_all_blocks=1 00:24:26.636 --rc geninfo_unexecuted_blocks=1 00:24:26.636 00:24:26.636 ' 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:26.636 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.636 --rc genhtml_branch_coverage=1 00:24:26.636 --rc genhtml_function_coverage=1 00:24:26.636 --rc genhtml_legend=1 00:24:26.636 --rc geninfo_all_blocks=1 00:24:26.636 --rc geninfo_unexecuted_blocks=1 00:24:26.636 00:24:26.636 ' 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:26.636 18:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.636 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.636 18:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:26.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:26.637 18:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.637 18:45:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.539 18:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:28.539 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:28.540 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:28.540 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.540 18:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:28.540 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.540 18:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:28.540 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.540 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.799 18:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:28.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:24:28.799 00:24:28.799 --- 10.0.0.2 ping statistics --- 00:24:28.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.799 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:24:28.799 00:24:28.799 --- 10.0.0.1 ping statistics --- 00:24:28.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.799 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:28.799 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:28.800 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.800 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:28.800 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=777238 00:24:28.800 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:28.800 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 777238 00:24:28.800 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 777238 ']' 00:24:28.800 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.800 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.800 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:28.800 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.800 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:28.800 [2024-11-17 18:45:15.230064] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:28.800 [2024-11-17 18:45:15.230158] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.800 [2024-11-17 18:45:15.302790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.800 [2024-11-17 18:45:15.344883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.800 [2024-11-17 18:45:15.344945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.800 [2024-11-17 18:45:15.344965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.800 [2024-11-17 18:45:15.344982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.800 [2024-11-17 18:45:15.344996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:28.800 [2024-11-17 18:45:15.345617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.058 [2024-11-17 18:45:15.485830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.058 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.058 Malloc0 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.059 [2024-11-17 18:45:15.525519] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=777260 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=777261 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=777262 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 777260 00:24:29.059 18:45:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:29.059 [2024-11-17 18:45:15.584084] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:29.059 [2024-11-17 18:45:15.594356] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:29.059 [2024-11-17 18:45:15.594770] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:30.431 Initializing NVMe Controllers 00:24:30.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:30.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:30.431 Initialization complete. Launching workers. 00:24:30.431 ======================================================== 00:24:30.431 Latency(us) 00:24:30.431 Device Information : IOPS MiB/s Average min max 00:24:30.431 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3919.00 15.31 254.69 177.61 582.33 00:24:30.431 ======================================================== 00:24:30.431 Total : 3919.00 15.31 254.69 177.61 582.33 00:24:30.431 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 777261 00:24:30.431 Initializing NVMe Controllers 00:24:30.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:30.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:30.431 Initialization complete. Launching workers. 
00:24:30.431 ======================================================== 00:24:30.431 Latency(us) 00:24:30.431 Device Information : IOPS MiB/s Average min max 00:24:30.431 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3707.00 14.48 269.37 158.26 40866.45 00:24:30.431 ======================================================== 00:24:30.431 Total : 3707.00 14.48 269.37 158.26 40866.45 00:24:30.431 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 777262 00:24:30.431 Initializing NVMe Controllers 00:24:30.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:30.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:30.431 Initialization complete. Launching workers. 00:24:30.431 ======================================================== 00:24:30.431 Latency(us) 00:24:30.431 Device Information : IOPS MiB/s Average min max 00:24:30.431 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41111.52 40150.64 41970.06 00:24:30.431 ======================================================== 00:24:30.431 Total : 25.00 0.10 41111.52 40150.64 41970.06 00:24:30.431 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:30.431 18:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:30.431 rmmod nvme_tcp 00:24:30.431 rmmod nvme_fabrics 00:24:30.431 rmmod nvme_keyring 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 777238 ']' 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 777238 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 777238 ']' 00:24:30.431 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 777238 00:24:30.432 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:30.432 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.432 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 777238 00:24:30.432 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:30.432 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:30.432 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 777238' 00:24:30.432 killing process with pid 777238 00:24:30.432 18:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 777238 00:24:30.432 18:45:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 777238 00:24:30.690 18:45:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:30.690 18:45:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:30.690 18:45:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:30.690 18:45:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:30.690 18:45:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:30.690 18:45:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:30.690 18:45:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:30.690 18:45:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:30.690 18:45:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:30.690 18:45:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.690 18:45:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.690 18:45:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.592 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:32.592 00:24:32.592 real 0m6.310s 00:24:32.592 user 0m5.428s 00:24:32.592 sys 0m2.649s 00:24:32.592 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.592 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.592 ************************************ 00:24:32.592 END TEST nvmf_control_msg_list 00:24:32.592 ************************************ 00:24:32.592 18:45:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:32.592 18:45:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:32.592 18:45:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.592 18:45:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:32.592 ************************************ 00:24:32.592 START TEST nvmf_wait_for_buf 00:24:32.592 ************************************ 00:24:32.592 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:32.850 * Looking for test storage... 
00:24:32.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:24:32.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.850 --rc genhtml_branch_coverage=1 00:24:32.850 --rc genhtml_function_coverage=1 00:24:32.850 --rc genhtml_legend=1 00:24:32.850 --rc geninfo_all_blocks=1 00:24:32.850 --rc geninfo_unexecuted_blocks=1 00:24:32.850 00:24:32.850 ' 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:32.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.850 --rc genhtml_branch_coverage=1 00:24:32.850 --rc genhtml_function_coverage=1 00:24:32.850 --rc genhtml_legend=1 00:24:32.850 --rc geninfo_all_blocks=1 00:24:32.850 --rc geninfo_unexecuted_blocks=1 00:24:32.850 00:24:32.850 ' 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:32.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.850 --rc genhtml_branch_coverage=1 00:24:32.850 --rc genhtml_function_coverage=1 00:24:32.850 --rc genhtml_legend=1 00:24:32.850 --rc geninfo_all_blocks=1 00:24:32.850 --rc geninfo_unexecuted_blocks=1 00:24:32.850 00:24:32.850 ' 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:32.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.850 --rc genhtml_branch_coverage=1 00:24:32.850 --rc genhtml_function_coverage=1 00:24:32.850 --rc genhtml_legend=1 00:24:32.850 --rc geninfo_all_blocks=1 00:24:32.850 --rc geninfo_unexecuted_blocks=1 00:24:32.850 00:24:32.850 ' 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.850 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:32.851 18:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:35.385 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:35.386 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:35.386 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:35.386 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:35.386 18:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:35.386 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:35.386 18:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.386 18:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:35.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:24:35.386 00:24:35.386 --- 10.0.0.2 ping statistics --- 00:24:35.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.386 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:35.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:24:35.386 00:24:35.386 --- 10.0.0.1 ping statistics --- 00:24:35.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.386 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:35.386 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:35.387 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:35.387 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.387 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=779456 00:24:35.387 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:35.387 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 779456 00:24:35.387 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 779456 ']' 00:24:35.387 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.387 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.387 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.387 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.387 18:45:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.387 [2024-11-17 18:45:21.801299] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:24:35.387 [2024-11-17 18:45:21.801384] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.387 [2024-11-17 18:45:21.876032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.387 [2024-11-17 18:45:21.920285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.387 [2024-11-17 18:45:21.920330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:35.387 [2024-11-17 18:45:21.920350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.387 [2024-11-17 18:45:21.920376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.387 [2024-11-17 18:45:21.920406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.387 [2024-11-17 18:45:21.921061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.645 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.645 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:35.645 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:35.645 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.645 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.645 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.645 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:35.645 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.646 
18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.646 Malloc0 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:24:35.646 [2024-11-17 18:45:22.159996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.646 [2024-11-17 18:45:22.184197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:35.646 18:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:35.903 [2024-11-17 18:45:22.271796] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:37.278 Initializing NVMe Controllers 00:24:37.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:37.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:37.278 Initialization complete. Launching workers. 00:24:37.278 ======================================================== 00:24:37.278 Latency(us) 00:24:37.278 Device Information : IOPS MiB/s Average min max 00:24:37.278 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 114.00 14.25 36541.25 8001.55 71842.02 00:24:37.278 ======================================================== 00:24:37.278 Total : 114.00 14.25 36541.25 8001.55 71842.02 00:24:37.278 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.278 18:45:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1798 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1798 -eq 0 ]] 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:37.278 rmmod nvme_tcp 00:24:37.278 rmmod nvme_fabrics 00:24:37.278 rmmod nvme_keyring 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 779456 ']' 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 779456 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 779456 ']' 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 779456 
00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.278 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779456 00:24:37.539 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:37.539 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:37.539 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779456' 00:24:37.539 killing process with pid 779456 00:24:37.539 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 779456 00:24:37.539 18:45:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 779456 00:24:37.539 18:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:37.539 18:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:37.539 18:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:37.539 18:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:37.539 18:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:37.539 18:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:37.539 18:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:37.539 18:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:37.539 18:45:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:37.539 18:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.539 18:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.539 18:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.079 00:24:40.079 real 0m6.981s 00:24:40.079 user 0m3.194s 00:24:40.079 sys 0m2.126s 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.079 ************************************ 00:24:40.079 END TEST nvmf_wait_for_buf 00:24:40.079 ************************************ 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:40.079 ************************************ 00:24:40.079 START TEST nvmf_fuzz 00:24:40.079 ************************************ 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:24:40.079 * Looking for test storage... 00:24:40.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:40.079 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:40.079 18:45:26 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:40.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.080 --rc genhtml_branch_coverage=1 00:24:40.080 --rc genhtml_function_coverage=1 
00:24:40.080 --rc genhtml_legend=1 00:24:40.080 --rc geninfo_all_blocks=1 00:24:40.080 --rc geninfo_unexecuted_blocks=1 00:24:40.080 00:24:40.080 ' 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:40.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.080 --rc genhtml_branch_coverage=1 00:24:40.080 --rc genhtml_function_coverage=1 00:24:40.080 --rc genhtml_legend=1 00:24:40.080 --rc geninfo_all_blocks=1 00:24:40.080 --rc geninfo_unexecuted_blocks=1 00:24:40.080 00:24:40.080 ' 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:40.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.080 --rc genhtml_branch_coverage=1 00:24:40.080 --rc genhtml_function_coverage=1 00:24:40.080 --rc genhtml_legend=1 00:24:40.080 --rc geninfo_all_blocks=1 00:24:40.080 --rc geninfo_unexecuted_blocks=1 00:24:40.080 00:24:40.080 ' 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:40.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.080 --rc genhtml_branch_coverage=1 00:24:40.080 --rc genhtml_function_coverage=1 00:24:40.080 --rc genhtml_legend=1 00:24:40.080 --rc geninfo_all_blocks=1 00:24:40.080 --rc geninfo_unexecuted_blocks=1 00:24:40.080 00:24:40.080 ' 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.080 
18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:40.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:40.080 18:45:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.986 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.987 18:45:28 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:24:41.987 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:41.987 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:24:41.987 Found net devices under 0000:0a:00.0: cvl_0_0
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:24:41.987 Found net devices under 0000:0a:00.1: cvl_0_1
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:41.987 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:42.245 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:42.245 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:42.245 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:42.245 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:42.245 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:42.245 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:42.245 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:42.245 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:42.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:42.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms
00:24:42.245
00:24:42.245 --- 10.0.0.2 ping statistics ---
00:24:42.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:42.245 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms
00:24:42.245 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:42.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:42.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms
00:24:42.245
00:24:42.245 --- 10.0.0.1 ping statistics ---
00:24:42.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:42.245 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms
00:24:42.245 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:42.245 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0
00:24:42.245 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:42.245 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:42.245 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:42.246 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:42.246 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:42.246 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:42.246 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:42.246 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=781675
00:24:42.246 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:24:42.246 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:24:42.246 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 781675
00:24:42.246 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 781675 ']'
00:24:42.246 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:42.246 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:42.246 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:42.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:42.246 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:42.246 18:45:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:42.504 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:42.504 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0
00:24:42.504 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:42.504 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.504 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:42.504 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.504 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512
00:24:42.504 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.504 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:42.504 Malloc0
00:24:42.504 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.504 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:42.504 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.504 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:42.762 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.762 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:42.762 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.762 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:42.762 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.762 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:42.762 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.762 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:24:42.762 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.762 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
00:24:42.762 18:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a
00:25:14.822 Fuzzing completed. Shutting down the fuzz application
00:25:14.822
00:25:14.822 Dumping successful admin opcodes:
00:25:14.822 8, 9, 10, 24,
00:25:14.822 Dumping successful io opcodes:
00:25:14.822 0, 9,
00:25:14.822 NS: 0x2000008eff00 I/O qp, Total commands completed: 499651, total successful commands: 2882, random_seed: 260351232
00:25:14.822 NS: 0x2000008eff00 admin qp, Total commands completed: 60272, total successful commands: 477, random_seed: 1542301568
00:25:14.822 18:45:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:25:14.822 Fuzzing completed. Shutting down the fuzz application
00:25:14.822
00:25:14.822 Dumping successful admin opcodes:
00:25:14.822 24,
00:25:14.822 Dumping successful io opcodes:
00:25:14.822
00:25:14.823 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 61314276
00:25:14.823 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 61429956
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:14.823 rmmod nvme_tcp
00:25:14.823 rmmod nvme_fabrics
00:25:14.823 rmmod nvme_keyring
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 781675 ']'
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 781675
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 781675 ']'
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 781675
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 781675
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 781675'
00:25:14.823 killing process with pid 781675
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 781675
00:25:14.823 18:46:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 781675
00:25:14.823 18:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:14.823 18:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:14.823 18:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:14.823 18:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr
00:25:14.823 18:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save
00:25:14.823 18:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:14.823 18:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore
00:25:14.823 18:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:14.823 18:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:14.823 18:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:14.823 18:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:14.823 18:46:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:16.730 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:16.730 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt
00:25:16.730
00:25:16.730 real 0m37.042s
00:25:16.730 user 0m50.610s
00:25:16.730 sys 0m15.260s
00:25:16.730 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:16.731 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:25:16.731 ************************************
00:25:16.731 END TEST nvmf_fuzz
00:25:16.731 ************************************
00:25:16.731 18:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp
00:25:16.731 18:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:16.731 18:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:16.731 18:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:25:16.731 ************************************
00:25:16.731 START TEST nvmf_multiconnection
00:25:16.731 ************************************
00:25:16.731 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp
00:25:16.989 * Looking for test storage...
00:25:16.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-:
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-:
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<'
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1
00:25:16.989 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:25:16.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:16.990 --rc genhtml_branch_coverage=1
00:25:16.990 --rc genhtml_function_coverage=1
00:25:16.990 --rc genhtml_legend=1
00:25:16.990 --rc geninfo_all_blocks=1
00:25:16.990 --rc geninfo_unexecuted_blocks=1
00:25:16.990
00:25:16.990 '
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:25:16.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:16.990 --rc genhtml_branch_coverage=1
00:25:16.990 --rc genhtml_function_coverage=1
00:25:16.990 --rc genhtml_legend=1
00:25:16.990 --rc geninfo_all_blocks=1
00:25:16.990 --rc geninfo_unexecuted_blocks=1
00:25:16.990
00:25:16.990 '
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:25:16.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:16.990 --rc genhtml_branch_coverage=1
00:25:16.990 --rc genhtml_function_coverage=1
00:25:16.990 --rc genhtml_legend=1
00:25:16.990 --rc geninfo_all_blocks=1
00:25:16.990 --rc geninfo_unexecuted_blocks=1
00:25:16.990
00:25:16.990 '
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:25:16.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:16.990 --rc genhtml_branch_coverage=1
00:25:16.990 --rc genhtml_function_coverage=1
00:25:16.990 --rc genhtml_legend=1
00:25:16.990 --rc geninfo_all_blocks=1
00:25:16.990 --rc geninfo_unexecuted_blocks=1
00:25:16.990
00:25:16.990 '
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:16.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable
00:25:16.990 18:46:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=()
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=()
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=()
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=()
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=()
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=()
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=()
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:25:19.521 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:25:19.521 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:19.521 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection --
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:19.522 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:19.522 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:19.522 18:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:19.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:25:19.522 00:25:19.522 --- 10.0.0.2 ping statistics --- 00:25:19.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.522 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:19.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:19.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:25:19.522 00:25:19.522 --- 10.0.0.1 ping statistics --- 00:25:19.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.522 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
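The trace above is `nvmf_tcp_init` building a point-to-point test topology from the two E810 ports: one port (`cvl_0_0`) is moved into a dedicated network namespace for the target, static `10.0.0.x/24` addresses go on each side, an `iptables` ACCEPT rule opens TCP port 4420, and a ping in each direction verifies reachability. A minimal dry-run sketch of that sequence (interface, namespace, and address names are taken from the log; the `run` echo wrapper is an addition so the sketch can execute without root or the real NICs):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence traced in the log above.
set -euo pipefail

TARGET_IF=cvl_0_0          # target side, moved into the namespace
INITIATOR_IF=cvl_0_1       # initiator side, stays in the default namespace
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
NVMF_PORT=4420

# Prints each command instead of executing it; replace the echo with "$@"
# to run the sequence for real (requires root and the cvl_0_* interfaces).
run() { echo "$*"; }

nvmf_tcp_init_cmds() {
  run ip -4 addr flush "$TARGET_IF"
  run ip -4 addr flush "$INITIATOR_IF"
  run ip netns add "$NS"
  run ip link set "$TARGET_IF" netns "$NS"
  run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
  run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
  run ip link set "$INITIATOR_IF" up
  run ip netns exec "$NS" ip link set "$TARGET_IF" up
  run ip netns exec "$NS" ip link set lo up
  run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport "$NVMF_PORT" -j ACCEPT
  # Verify reachability in both directions, as the harness does:
  run ping -c 1 "$TARGET_IP"
  run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
}

nvmf_tcp_init_cmds
```

The namespace isolates the target's network stack, so initiator and target can share one host while still crossing a real cable between the two physical ports.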
00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=787288 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 787288 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 787288 ']' 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:19.522 18:46:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.522 [2024-11-17 18:46:05.799237] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:25:19.522 [2024-11-17 18:46:05.799345] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.522 [2024-11-17 18:46:05.873038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:19.522 [2024-11-17 18:46:05.920496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.522 [2024-11-17 18:46:05.920566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.522 [2024-11-17 18:46:05.920580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.522 [2024-11-17 18:46:05.920591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.522 [2024-11-17 18:46:05.920601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
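At this point `nvmfappstart` has launched `nvmf_tgt` inside the target namespace and `waitforlisten` blocks until the app's RPC UNIX socket (`/var/tmp/spdk.sock`) is ready, printing the "Waiting for process..." line seen above. A hedged sketch of that polling step (the loop body here is an illustrative assumption, not SPDK's exact `waitforlisten` implementation; only the socket path, retry budget, and message come from the log):

```shell
# Illustrative poll loop: succeed once the app's RPC socket exists, fail if
# the process dies first or the retry budget is exhausted. This is a sketch
# of what waitforlisten accomplishes, not SPDK's actual code.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
    [ -S "$rpc_addr" ] && return 0            # socket exists: app is listening
    sleep 0.1
  done
  return 1
}
```

Typical use mirrors the traced startup: `ip netns exec cvl_0_0_ns_spdk nvmf_tgt -m 0xF & waitforlisten $!`.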
00:25:19.522 [2024-11-17 18:46:05.922220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.522 [2024-11-17 18:46:05.922286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:19.522 [2024-11-17 18:46:05.922351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:19.522 [2024-11-17 18:46:05.922354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.522 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:19.522 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:19.522 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:19.522 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:19.522 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.522 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.522 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:19.523 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.523 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.523 [2024-11-17 18:46:06.076387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.523 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.523 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:19.523 18:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.523 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:19.523 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.523 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.781 Malloc1 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.781 [2024-11-17 18:46:06.143307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.781 Malloc2 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.781 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.781 Malloc3 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.782 Malloc4 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.782 
18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.782 Malloc5 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.782 18:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:19.782 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.040 Malloc6 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 Malloc7 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 Malloc8 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 Malloc9 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 Malloc10 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.041 Malloc11 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:20.041 
18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.041 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.299 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.299 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:20.299 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.299 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.299 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.299 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:20.299 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.299 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.299 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.299 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:20.299 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.299 18:46:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:25:20.864 18:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:20.864 18:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:20.865 18:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:20.865 18:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:20.865 18:46:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:22.840 18:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:22.840 18:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:22.840 18:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:22.840 18:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:22.840 18:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:22.840 18:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:22.840 18:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.840 18:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:23.419 18:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:23.419 18:46:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:23.419 18:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.419 18:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:23.419 18:46:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:25.947 18:46:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:25.947 18:46:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:25.947 18:46:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:25.947 18:46:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:25.947 18:46:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.947 18:46:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:25.947 18:46:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.947 18:46:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:26.205 18:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:26.205 18:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:26.205 18:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.205 18:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:26.205 18:46:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:28.102 18:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:28.102 18:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:28.102 18:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:28.102 18:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:28.102 18:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.102 18:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:28.102 18:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.102 18:46:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:29.036 18:46:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:29.036 18:46:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:29.036 18:46:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:29.036 
18:46:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:29.036 18:46:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:30.935 18:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:30.935 18:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:30.935 18:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:30.935 18:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:30.935 18:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:30.935 18:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:30.935 18:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.935 18:46:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:31.868 18:46:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:31.868 18:46:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:31.868 18:46:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:31.868 18:46:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:31.868 18:46:18 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:33.775 18:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:33.776 18:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:33.776 18:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:33.776 18:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:33.776 18:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:33.776 18:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:33.776 18:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.776 18:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:34.709 18:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:34.709 18:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:34.709 18:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.709 18:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:34.709 18:46:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:36.607 18:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:36.607 18:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:36.607 18:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:36.607 18:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:36.607 18:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:36.607 18:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:36.607 18:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.607 18:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:37.173 18:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:37.173 18:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:37.173 18:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:37.173 18:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:37.173 18:46:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:39.699 18:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:39.699 18:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:39.699 18:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:39.699 18:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:39.699 18:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:39.699 18:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:39.699 18:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.699 18:46:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:40.264 18:46:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:40.264 18:46:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:40.264 18:46:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:40.264 18:46:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:40.264 18:46:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:42.162 18:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:42.162 18:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:42.162 18:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:42.162 18:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:42.162 18:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:42.162 18:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:42.162 18:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.162 18:46:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:43.095 18:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:43.095 18:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:43.095 18:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:43.095 18:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:43.095 18:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:44.993 18:46:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:44.993 18:46:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:44.993 18:46:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:44.993 18:46:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:44.993 18:46:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.993 18:46:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:44.993 18:46:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.993 18:46:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:46.362 18:46:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:46.362 18:46:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:46.362 18:46:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:46.362 18:46:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:46.362 18:46:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:48.261 18:46:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:48.261 18:46:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:48.261 18:46:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:48.261 18:46:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:48.261 18:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:48.261 18:46:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:48.261 18:46:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.261 18:46:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:48.826 18:46:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:48.826 18:46:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:48.826 18:46:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:48.826 18:46:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:48.826 18:46:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:51.384 18:46:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:51.384 18:46:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:51.384 18:46:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:51.384 18:46:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:51.384 18:46:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:51.384 
18:46:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:51.384 18:46:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:51.384 [global] 00:25:51.384 thread=1 00:25:51.384 invalidate=1 00:25:51.384 rw=read 00:25:51.384 time_based=1 00:25:51.384 runtime=10 00:25:51.384 ioengine=libaio 00:25:51.384 direct=1 00:25:51.384 bs=262144 00:25:51.384 iodepth=64 00:25:51.384 norandommap=1 00:25:51.384 numjobs=1 00:25:51.384 00:25:51.384 [job0] 00:25:51.384 filename=/dev/nvme0n1 00:25:51.384 [job1] 00:25:51.384 filename=/dev/nvme10n1 00:25:51.384 [job2] 00:25:51.384 filename=/dev/nvme1n1 00:25:51.384 [job3] 00:25:51.384 filename=/dev/nvme2n1 00:25:51.384 [job4] 00:25:51.384 filename=/dev/nvme3n1 00:25:51.384 [job5] 00:25:51.384 filename=/dev/nvme4n1 00:25:51.384 [job6] 00:25:51.384 filename=/dev/nvme5n1 00:25:51.384 [job7] 00:25:51.384 filename=/dev/nvme6n1 00:25:51.384 [job8] 00:25:51.384 filename=/dev/nvme7n1 00:25:51.384 [job9] 00:25:51.384 filename=/dev/nvme8n1 00:25:51.384 [job10] 00:25:51.384 filename=/dev/nvme9n1 00:25:51.384 Could not set queue depth (nvme0n1) 00:25:51.384 Could not set queue depth (nvme10n1) 00:25:51.384 Could not set queue depth (nvme1n1) 00:25:51.384 Could not set queue depth (nvme2n1) 00:25:51.384 Could not set queue depth (nvme3n1) 00:25:51.384 Could not set queue depth (nvme4n1) 00:25:51.384 Could not set queue depth (nvme5n1) 00:25:51.384 Could not set queue depth (nvme6n1) 00:25:51.384 Could not set queue depth (nvme7n1) 00:25:51.384 Could not set queue depth (nvme8n1) 00:25:51.384 Could not set queue depth (nvme9n1) 00:25:51.384 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.384 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:25:51.384 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.384 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.384 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.384 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.384 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.384 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.384 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.384 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.384 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.384 fio-3.35 00:25:51.384 Starting 11 threads 00:26:03.592 00:26:03.592 job0: (groupid=0, jobs=1): err= 0: pid=791531: Sun Nov 17 18:46:48 2024 00:26:03.592 read: IOPS=164, BW=41.2MiB/s (43.3MB/s)(418MiB/10140msec) 00:26:03.592 slat (usec): min=10, max=295594, avg=4828.46, stdev=22646.08 00:26:03.592 clat (msec): min=29, max=1198, avg=382.80, stdev=265.35 00:26:03.592 lat (msec): min=29, max=1198, avg=387.63, stdev=269.66 00:26:03.592 clat percentiles (msec): 00:26:03.592 | 1.00th=[ 54], 5.00th=[ 100], 10.00th=[ 120], 20.00th=[ 150], 00:26:03.592 | 30.00th=[ 180], 40.00th=[ 207], 50.00th=[ 279], 60.00th=[ 418], 00:26:03.592 | 70.00th=[ 498], 80.00th=[ 642], 90.00th=[ 810], 95.00th=[ 894], 00:26:03.592 | 99.00th=[ 969], 99.50th=[ 1028], 99.90th=[ 1083], 99.95th=[ 1200], 00:26:03.592 | 99.99th=[ 1200] 00:26:03.592 bw ( KiB/s): min=11776, 
max=79872, per=5.02%, avg=41218.80, stdev=23748.79, samples=20 00:26:03.592 iops : min= 46, max= 312, avg=160.90, stdev=92.77, samples=20 00:26:03.592 lat (msec) : 50=0.84%, 100=4.30%, 250=41.54%, 500=23.97%, 750=14.70% 00:26:03.592 lat (msec) : 1000=14.11%, 2000=0.54% 00:26:03.592 cpu : usr=0.06%, sys=0.65%, ctx=274, majf=0, minf=4097 00:26:03.592 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:26:03.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.592 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.592 issued rwts: total=1673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.592 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.592 job1: (groupid=0, jobs=1): err= 0: pid=791532: Sun Nov 17 18:46:48 2024 00:26:03.592 read: IOPS=179, BW=44.8MiB/s (46.9MB/s)(452MiB/10087msec) 00:26:03.592 slat (usec): min=8, max=436949, avg=3393.96, stdev=22728.30 00:26:03.592 clat (usec): min=1828, max=1734.2k, avg=353809.87, stdev=345516.03 00:26:03.592 lat (usec): min=1852, max=1734.2k, avg=357203.83, stdev=348935.27 00:26:03.592 clat percentiles (msec): 00:26:03.592 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 29], 20.00th=[ 80], 00:26:03.592 | 30.00th=[ 144], 40.00th=[ 209], 50.00th=[ 247], 60.00th=[ 326], 00:26:03.592 | 70.00th=[ 405], 80.00th=[ 567], 90.00th=[ 735], 95.00th=[ 1284], 00:26:03.592 | 99.00th=[ 1519], 99.50th=[ 1569], 99.90th=[ 1737], 99.95th=[ 1737], 00:26:03.592 | 99.99th=[ 1737] 00:26:03.592 bw ( KiB/s): min= 6144, max=114404, per=5.44%, avg=44623.50, stdev=25894.32, samples=20 00:26:03.592 iops : min= 24, max= 446, avg=174.20, stdev=101.04, samples=20 00:26:03.592 lat (msec) : 2=0.11%, 4=1.94%, 10=3.88%, 50=7.48%, 100=10.69% 00:26:03.592 lat (msec) : 250=27.13%, 500=25.19%, 750=13.68%, 1000=3.16%, 2000=6.76% 00:26:03.592 cpu : usr=0.07%, sys=0.57%, ctx=420, majf=0, minf=4097 00:26:03.592 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, 
>=64=96.5% 00:26:03.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.592 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.592 issued rwts: total=1806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.592 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.592 job2: (groupid=0, jobs=1): err= 0: pid=791533: Sun Nov 17 18:46:48 2024 00:26:03.592 read: IOPS=307, BW=77.0MiB/s (80.7MB/s)(776MiB/10083msec) 00:26:03.592 slat (usec): min=9, max=475428, avg=2179.99, stdev=17318.81 00:26:03.592 clat (usec): min=1364, max=1214.8k, avg=205568.21, stdev=263353.30 00:26:03.592 lat (usec): min=1418, max=1272.9k, avg=207748.20, stdev=265320.68 00:26:03.592 clat percentiles (msec): 00:26:03.592 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 18], 20.00th=[ 42], 00:26:03.592 | 30.00th=[ 61], 40.00th=[ 78], 50.00th=[ 99], 60.00th=[ 121], 00:26:03.592 | 70.00th=[ 157], 80.00th=[ 376], 90.00th=[ 634], 95.00th=[ 835], 00:26:03.592 | 99.00th=[ 1083], 99.50th=[ 1133], 99.90th=[ 1217], 99.95th=[ 1217], 00:26:03.592 | 99.99th=[ 1217] 00:26:03.592 bw ( KiB/s): min= 5632, max=215040, per=9.48%, avg=77835.40, stdev=62027.85, samples=20 00:26:03.592 iops : min= 22, max= 840, avg=304.00, stdev=242.33, samples=20 00:26:03.592 lat (msec) : 2=0.13%, 4=5.06%, 10=4.03%, 20=1.39%, 50=12.82% 00:26:03.592 lat (msec) : 100=27.77%, 250=27.67%, 500=5.80%, 750=6.96%, 1000=6.83% 00:26:03.592 lat (msec) : 2000=1.55% 00:26:03.592 cpu : usr=0.23%, sys=1.21%, ctx=986, majf=0, minf=3721 00:26:03.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:03.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.592 issued rwts: total=3104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.592 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.592 job3: (groupid=0, jobs=1): err= 0: pid=791534: Sun 
Nov 17 18:46:48 2024 00:26:03.592 read: IOPS=297, BW=74.3MiB/s (77.9MB/s)(754MiB/10144msec) 00:26:03.592 slat (usec): min=13, max=380966, avg=3194.93, stdev=13641.24 00:26:03.592 clat (msec): min=15, max=996, avg=212.03, stdev=141.38 00:26:03.592 lat (msec): min=15, max=996, avg=215.22, stdev=143.39 00:26:03.592 clat percentiles (msec): 00:26:03.592 | 1.00th=[ 32], 5.00th=[ 51], 10.00th=[ 75], 20.00th=[ 106], 00:26:03.592 | 30.00th=[ 118], 40.00th=[ 142], 50.00th=[ 165], 60.00th=[ 220], 00:26:03.592 | 70.00th=[ 262], 80.00th=[ 296], 90.00th=[ 409], 95.00th=[ 485], 00:26:03.592 | 99.00th=[ 659], 99.50th=[ 818], 99.90th=[ 835], 99.95th=[ 995], 00:26:03.592 | 99.99th=[ 995] 00:26:03.592 bw ( KiB/s): min=11264, max=211968, per=9.20%, avg=75530.35, stdev=49791.87, samples=20 00:26:03.592 iops : min= 44, max= 828, avg=295.00, stdev=194.51, samples=20 00:26:03.592 lat (msec) : 20=0.23%, 50=4.41%, 100=12.14%, 250=50.70%, 500=28.07% 00:26:03.592 lat (msec) : 750=3.78%, 1000=0.66% 00:26:03.592 cpu : usr=0.18%, sys=1.05%, ctx=456, majf=0, minf=4097 00:26:03.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:03.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.593 issued rwts: total=3014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.593 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.593 job4: (groupid=0, jobs=1): err= 0: pid=791535: Sun Nov 17 18:46:48 2024 00:26:03.593 read: IOPS=193, BW=48.3MiB/s (50.6MB/s)(487MiB/10092msec) 00:26:03.593 slat (usec): min=13, max=588514, avg=5062.93, stdev=28844.82 00:26:03.593 clat (msec): min=16, max=1443, avg=326.11, stdev=291.33 00:26:03.593 lat (msec): min=16, max=1443, avg=331.18, stdev=295.26 00:26:03.593 clat percentiles (msec): 00:26:03.593 | 1.00th=[ 69], 5.00th=[ 84], 10.00th=[ 89], 20.00th=[ 123], 00:26:03.593 | 30.00th=[ 150], 40.00th=[ 176], 50.00th=[ 199], 
60.00th=[ 239], 00:26:03.593 | 70.00th=[ 355], 80.00th=[ 506], 90.00th=[ 852], 95.00th=[ 1011], 00:26:03.593 | 99.00th=[ 1200], 99.50th=[ 1267], 99.90th=[ 1452], 99.95th=[ 1452], 00:26:03.593 | 99.99th=[ 1452] 00:26:03.593 bw ( KiB/s): min= 2048, max=147456, per=5.88%, avg=48259.50, stdev=39743.91, samples=20 00:26:03.593 iops : min= 8, max= 576, avg=188.40, stdev=155.26, samples=20 00:26:03.593 lat (msec) : 20=0.41%, 100=13.13%, 250=47.77%, 500=17.91%, 750=9.03% 00:26:03.593 lat (msec) : 1000=6.11%, 2000=5.64% 00:26:03.593 cpu : usr=0.15%, sys=0.72%, ctx=247, majf=0, minf=4097 00:26:03.593 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:26:03.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.593 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.593 issued rwts: total=1949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.593 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.593 job5: (groupid=0, jobs=1): err= 0: pid=791536: Sun Nov 17 18:46:48 2024 00:26:03.593 read: IOPS=322, BW=80.7MiB/s (84.6MB/s)(819MiB/10143msec) 00:26:03.593 slat (usec): min=12, max=247519, avg=2791.18, stdev=11916.93 00:26:03.593 clat (msec): min=19, max=886, avg=195.33, stdev=144.64 00:26:03.593 lat (msec): min=19, max=886, avg=198.12, stdev=146.50 00:26:03.593 clat percentiles (msec): 00:26:03.593 | 1.00th=[ 33], 5.00th=[ 42], 10.00th=[ 63], 20.00th=[ 87], 00:26:03.593 | 30.00th=[ 99], 40.00th=[ 114], 50.00th=[ 144], 60.00th=[ 190], 00:26:03.593 | 70.00th=[ 241], 80.00th=[ 288], 90.00th=[ 401], 95.00th=[ 518], 00:26:03.593 | 99.00th=[ 676], 99.50th=[ 676], 99.90th=[ 743], 99.95th=[ 810], 00:26:03.593 | 99.99th=[ 885] 00:26:03.593 bw ( KiB/s): min=12825, max=176128, per=10.01%, avg=82186.95, stdev=52905.39, samples=20 00:26:03.593 iops : min= 50, max= 688, avg=321.00, stdev=206.68, samples=20 00:26:03.593 lat (msec) : 20=0.09%, 50=7.88%, 100=23.52%, 250=40.81%, 500=22.33% 
00:26:03.593 lat (msec) : 750=5.28%, 1000=0.09% 00:26:03.593 cpu : usr=0.17%, sys=1.25%, ctx=505, majf=0, minf=4097 00:26:03.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:03.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.593 issued rwts: total=3274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.593 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.593 job6: (groupid=0, jobs=1): err= 0: pid=791537: Sun Nov 17 18:46:48 2024 00:26:03.593 read: IOPS=606, BW=152MiB/s (159MB/s)(1539MiB/10146msec) 00:26:03.593 slat (usec): min=12, max=368130, avg=1537.22, stdev=8515.27 00:26:03.593 clat (usec): min=514, max=925535, avg=103846.18, stdev=126425.04 00:26:03.593 lat (usec): min=531, max=925600, avg=105383.40, stdev=128240.54 00:26:03.593 clat percentiles (usec): 00:26:03.593 | 1.00th=[ 775], 5.00th=[ 1106], 10.00th=[ 30016], 20.00th=[ 34341], 00:26:03.593 | 30.00th=[ 37487], 40.00th=[ 40109], 50.00th=[ 43254], 60.00th=[ 50594], 00:26:03.593 | 70.00th=[ 76022], 80.00th=[193987], 90.00th=[270533], 95.00th=[379585], 00:26:03.593 | 99.00th=[583009], 99.50th=[675283], 99.90th=[876610], 99.95th=[876610], 00:26:03.593 | 99.99th=[926942] 00:26:03.593 bw ( KiB/s): min=31232, max=479232, per=19.00%, avg=155942.05, stdev=148829.43, samples=20 00:26:03.593 iops : min= 122, max= 1872, avg=609.10, stdev=581.36, samples=20 00:26:03.593 lat (usec) : 750=0.50%, 1000=3.57% 00:26:03.593 lat (msec) : 2=2.39%, 4=0.05%, 10=0.15%, 20=0.29%, 50=52.59% 00:26:03.593 lat (msec) : 100=13.46%, 250=14.93%, 500=10.46%, 750=1.35%, 1000=0.26% 00:26:03.593 cpu : usr=0.32%, sys=1.97%, ctx=1436, majf=0, minf=4097 00:26:03.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:03.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.593 issued rwts: total=6157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.593 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.593 job7: (groupid=0, jobs=1): err= 0: pid=791538: Sun Nov 17 18:46:48 2024 00:26:03.593 read: IOPS=202, BW=50.7MiB/s (53.2MB/s)(511MiB/10082msec) 00:26:03.593 slat (usec): min=8, max=445007, avg=3793.39, stdev=19928.36 00:26:03.593 clat (msec): min=19, max=1194, avg=311.53, stdev=242.58 00:26:03.593 lat (msec): min=19, max=1194, avg=315.32, stdev=246.37 00:26:03.593 clat percentiles (msec): 00:26:03.593 | 1.00th=[ 30], 5.00th=[ 68], 10.00th=[ 89], 20.00th=[ 132], 00:26:03.593 | 30.00th=[ 153], 40.00th=[ 163], 50.00th=[ 188], 60.00th=[ 284], 00:26:03.593 | 70.00th=[ 401], 80.00th=[ 514], 90.00th=[ 642], 95.00th=[ 869], 00:26:03.593 | 99.00th=[ 986], 99.50th=[ 1036], 99.90th=[ 1116], 99.95th=[ 1200], 00:26:03.593 | 99.99th=[ 1200] 00:26:03.593 bw ( KiB/s): min=14848, max=110080, per=6.18%, avg=50701.80, stdev=33037.60, samples=20 00:26:03.593 iops : min= 58, max= 430, avg=198.00, stdev=129.07, samples=20 00:26:03.593 lat (msec) : 20=0.68%, 50=3.67%, 100=8.17%, 250=43.47%, 500=22.69% 00:26:03.593 lat (msec) : 750=13.11%, 1000=7.48%, 2000=0.73% 00:26:03.593 cpu : usr=0.05%, sys=0.65%, ctx=454, majf=0, minf=4097 00:26:03.593 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:03.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.593 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.593 issued rwts: total=2045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.593 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.593 job8: (groupid=0, jobs=1): err= 0: pid=791539: Sun Nov 17 18:46:48 2024 00:26:03.593 read: IOPS=168, BW=42.2MiB/s (44.3MB/s)(428MiB/10134msec) 00:26:03.593 slat (usec): min=11, max=407623, avg=4543.13, stdev=23683.82 00:26:03.593 clat (msec): min=92, max=1198, 
avg=374.24, stdev=251.62 00:26:03.593 lat (msec): min=92, max=1233, avg=378.78, stdev=255.54 00:26:03.593 clat percentiles (msec): 00:26:03.593 | 1.00th=[ 128], 5.00th=[ 144], 10.00th=[ 155], 20.00th=[ 167], 00:26:03.593 | 30.00th=[ 180], 40.00th=[ 203], 50.00th=[ 300], 60.00th=[ 342], 00:26:03.593 | 70.00th=[ 393], 80.00th=[ 609], 90.00th=[ 802], 95.00th=[ 877], 00:26:03.593 | 99.00th=[ 1053], 99.50th=[ 1167], 99.90th=[ 1200], 99.95th=[ 1200], 00:26:03.593 | 99.99th=[ 1200] 00:26:03.593 bw ( KiB/s): min=13312, max=98304, per=5.14%, avg=42189.45, stdev=29411.31, samples=20 00:26:03.593 iops : min= 52, max= 384, avg=164.75, stdev=114.94, samples=20 00:26:03.593 lat (msec) : 100=0.29%, 250=43.25%, 500=30.27%, 750=11.75%, 1000=12.74% 00:26:03.593 lat (msec) : 2000=1.69% 00:26:03.593 cpu : usr=0.09%, sys=0.71%, ctx=301, majf=0, minf=4097 00:26:03.593 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:26:03.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.593 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.593 issued rwts: total=1711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.593 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.593 job9: (groupid=0, jobs=1): err= 0: pid=791546: Sun Nov 17 18:46:48 2024 00:26:03.593 read: IOPS=580, BW=145MiB/s (152MB/s)(1454MiB/10025msec) 00:26:03.593 slat (usec): min=11, max=80493, avg=1640.50, stdev=5233.91 00:26:03.593 clat (usec): min=1480, max=421945, avg=108621.12, stdev=67954.86 00:26:03.593 lat (usec): min=1730, max=423540, avg=110261.62, stdev=68947.38 00:26:03.593 clat percentiles (msec): 00:26:03.593 | 1.00th=[ 8], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 55], 00:26:03.593 | 30.00th=[ 67], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 111], 00:26:03.593 | 70.00th=[ 136], 80.00th=[ 165], 90.00th=[ 192], 95.00th=[ 222], 00:26:03.593 | 99.00th=[ 359], 99.50th=[ 372], 99.90th=[ 414], 99.95th=[ 422], 00:26:03.593 | 99.99th=[ 
422] 00:26:03.593 bw ( KiB/s): min=41984, max=248320, per=17.94%, avg=147222.55, stdev=63916.49, samples=20 00:26:03.593 iops : min= 164, max= 970, avg=575.05, stdev=249.67, samples=20 00:26:03.593 lat (msec) : 2=0.14%, 4=0.36%, 10=0.86%, 20=0.93%, 50=10.15% 00:26:03.593 lat (msec) : 100=45.40%, 250=38.21%, 500=3.96% 00:26:03.593 cpu : usr=0.42%, sys=2.03%, ctx=1193, majf=0, minf=4097 00:26:03.593 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:03.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.593 issued rwts: total=5815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.593 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.593 job10: (groupid=0, jobs=1): err= 0: pid=791547: Sun Nov 17 18:46:48 2024 00:26:03.593 read: IOPS=195, BW=48.9MiB/s (51.3MB/s)(494MiB/10096msec) 00:26:03.593 slat (usec): min=12, max=280622, avg=4923.37, stdev=19140.82 00:26:03.593 clat (msec): min=11, max=1106, avg=321.86, stdev=252.21 00:26:03.593 lat (msec): min=11, max=1112, avg=326.78, stdev=256.32 00:26:03.593 clat percentiles (msec): 00:26:03.593 | 1.00th=[ 17], 5.00th=[ 79], 10.00th=[ 90], 20.00th=[ 121], 00:26:03.593 | 30.00th=[ 146], 40.00th=[ 188], 50.00th=[ 222], 60.00th=[ 288], 00:26:03.593 | 70.00th=[ 393], 80.00th=[ 523], 90.00th=[ 760], 95.00th=[ 869], 00:26:03.593 | 99.00th=[ 986], 99.50th=[ 1011], 99.90th=[ 1053], 99.95th=[ 1099], 00:26:03.593 | 99.99th=[ 1099] 00:26:03.593 bw ( KiB/s): min=12288, max=141824, per=5.96%, avg=48942.60, stdev=39244.42, samples=20 00:26:03.593 iops : min= 48, max= 554, avg=191.10, stdev=153.36, samples=20 00:26:03.593 lat (msec) : 20=1.27%, 50=1.87%, 100=12.30%, 250=40.94%, 500=22.93% 00:26:03.593 lat (msec) : 750=9.97%, 1000=9.97%, 2000=0.76% 00:26:03.593 cpu : usr=0.12%, sys=0.78%, ctx=284, majf=0, minf=4097 00:26:03.593 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 
32=1.6%, >=64=96.8% 00:26:03.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.593 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.593 issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.593 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.593 00:26:03.593 Run status group 0 (all jobs): 00:26:03.593 READ: bw=801MiB/s (840MB/s), 41.2MiB/s-152MiB/s (43.3MB/s-159MB/s), io=8131MiB (8526MB), run=10025-10146msec 00:26:03.593 00:26:03.593 Disk stats (read/write): 00:26:03.593 nvme0n1: ios=3212/0, merge=0/0, ticks=1218913/0, in_queue=1218913, util=97.36% 00:26:03.593 nvme10n1: ios=3477/0, merge=0/0, ticks=1246444/0, in_queue=1246444, util=97.57% 00:26:03.593 nvme1n1: ios=6062/0, merge=0/0, ticks=1235353/0, in_queue=1235353, util=97.83% 00:26:03.593 nvme2n1: ios=5887/0, merge=0/0, ticks=1212598/0, in_queue=1212598, util=97.95% 00:26:03.593 nvme3n1: ios=3771/0, merge=0/0, ticks=1225693/0, in_queue=1225693, util=98.02% 00:26:03.593 nvme4n1: ios=6420/0, merge=0/0, ticks=1217649/0, in_queue=1217649, util=98.33% 00:26:03.593 nvme5n1: ios=12187/0, merge=0/0, ticks=1220017/0, in_queue=1220017, util=98.49% 00:26:03.593 nvme6n1: ios=3907/0, merge=0/0, ticks=1235289/0, in_queue=1235289, util=98.58% 00:26:03.593 nvme7n1: ios=3295/0, merge=0/0, ticks=1225383/0, in_queue=1225383, util=98.94% 00:26:03.593 nvme8n1: ios=11375/0, merge=0/0, ticks=1238857/0, in_queue=1238857, util=99.14% 00:26:03.593 nvme9n1: ios=3815/0, merge=0/0, ticks=1229210/0, in_queue=1229210, util=99.26% 00:26:03.593 18:46:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:03.593 [global] 00:26:03.593 thread=1 00:26:03.593 invalidate=1 00:26:03.593 rw=randwrite 00:26:03.593 time_based=1 00:26:03.593 runtime=10 00:26:03.593 ioengine=libaio 00:26:03.593 direct=1 
00:26:03.593 bs=262144 00:26:03.593 iodepth=64 00:26:03.593 norandommap=1 00:26:03.593 numjobs=1 00:26:03.593 00:26:03.593 [job0] 00:26:03.593 filename=/dev/nvme0n1 00:26:03.593 [job1] 00:26:03.593 filename=/dev/nvme10n1 00:26:03.593 [job2] 00:26:03.593 filename=/dev/nvme1n1 00:26:03.593 [job3] 00:26:03.593 filename=/dev/nvme2n1 00:26:03.593 [job4] 00:26:03.593 filename=/dev/nvme3n1 00:26:03.593 [job5] 00:26:03.593 filename=/dev/nvme4n1 00:26:03.593 [job6] 00:26:03.593 filename=/dev/nvme5n1 00:26:03.593 [job7] 00:26:03.593 filename=/dev/nvme6n1 00:26:03.593 [job8] 00:26:03.593 filename=/dev/nvme7n1 00:26:03.593 [job9] 00:26:03.593 filename=/dev/nvme8n1 00:26:03.593 [job10] 00:26:03.593 filename=/dev/nvme9n1 00:26:03.593 Could not set queue depth (nvme0n1) 00:26:03.593 Could not set queue depth (nvme10n1) 00:26:03.593 Could not set queue depth (nvme1n1) 00:26:03.593 Could not set queue depth (nvme2n1) 00:26:03.593 Could not set queue depth (nvme3n1) 00:26:03.593 Could not set queue depth (nvme4n1) 00:26:03.593 Could not set queue depth (nvme5n1) 00:26:03.593 Could not set queue depth (nvme6n1) 00:26:03.593 Could not set queue depth (nvme7n1) 00:26:03.593 Could not set queue depth (nvme8n1) 00:26:03.593 Could not set queue depth (nvme9n1) 00:26:03.593 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.593 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.593 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.593 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.593 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.593 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:03.593 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.593 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.593 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.593 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.593 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.593 fio-3.35 00:26:03.593 Starting 11 threads 00:26:13.567 00:26:13.567 job0: (groupid=0, jobs=1): err= 0: pid=792270: Sun Nov 17 18:46:59 2024 00:26:13.567 write: IOPS=333, BW=83.5MiB/s (87.5MB/s)(844MiB/10112msec); 0 zone resets 00:26:13.567 slat (usec): min=20, max=158950, avg=2596.57, stdev=6789.01 00:26:13.567 clat (msec): min=5, max=659, avg=189.02, stdev=123.27 00:26:13.567 lat (msec): min=5, max=659, avg=191.62, stdev=124.88 00:26:13.567 clat percentiles (msec): 00:26:13.567 | 1.00th=[ 21], 5.00th=[ 78], 10.00th=[ 88], 20.00th=[ 114], 00:26:13.567 | 30.00th=[ 125], 40.00th=[ 132], 50.00th=[ 142], 60.00th=[ 161], 00:26:13.567 | 70.00th=[ 182], 80.00th=[ 268], 90.00th=[ 418], 95.00th=[ 464], 00:26:13.567 | 99.00th=[ 575], 99.50th=[ 600], 99.90th=[ 642], 99.95th=[ 659], 00:26:13.567 | 99.99th=[ 659] 00:26:13.567 bw ( KiB/s): min=26624, max=179712, per=8.12%, avg=84821.30, stdev=46115.75, samples=20 00:26:13.567 iops : min= 104, max= 702, avg=331.30, stdev=180.14, samples=20 00:26:13.567 lat (msec) : 10=0.21%, 20=0.80%, 50=2.31%, 100=13.18%, 250=62.65% 00:26:13.567 lat (msec) : 500=18.60%, 750=2.25% 00:26:13.567 cpu : usr=1.10%, sys=1.09%, ctx=1245, majf=0, minf=2 00:26:13.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:26:13.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.567 issued rwts: total=0,3376,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.567 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.567 job1: (groupid=0, jobs=1): err= 0: pid=792282: Sun Nov 17 18:46:59 2024 00:26:13.567 write: IOPS=268, BW=67.2MiB/s (70.4MB/s)(685MiB/10193msec); 0 zone resets 00:26:13.567 slat (usec): min=25, max=90429, avg=2437.36, stdev=7351.41 00:26:13.567 clat (msec): min=4, max=610, avg=235.52, stdev=136.58 00:26:13.567 lat (msec): min=4, max=610, avg=237.96, stdev=138.32 00:26:13.567 clat percentiles (msec): 00:26:13.567 | 1.00th=[ 16], 5.00th=[ 59], 10.00th=[ 75], 20.00th=[ 114], 00:26:13.567 | 30.00th=[ 144], 40.00th=[ 167], 50.00th=[ 199], 60.00th=[ 241], 00:26:13.567 | 70.00th=[ 321], 80.00th=[ 384], 90.00th=[ 435], 95.00th=[ 472], 00:26:13.567 | 99.00th=[ 535], 99.50th=[ 567], 99.90th=[ 600], 99.95th=[ 609], 00:26:13.567 | 99.99th=[ 609] 00:26:13.567 bw ( KiB/s): min=30720, max=155136, per=6.55%, avg=68488.50, stdev=32220.54, samples=20 00:26:13.567 iops : min= 120, max= 606, avg=267.50, stdev=125.84, samples=20 00:26:13.567 lat (msec) : 10=0.04%, 20=2.08%, 50=1.68%, 100=12.08%, 250=45.60% 00:26:13.567 lat (msec) : 500=35.41%, 750=3.10% 00:26:13.567 cpu : usr=0.90%, sys=1.08%, ctx=1535, majf=0, minf=1 00:26:13.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:13.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.567 issued rwts: total=0,2739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.567 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.567 job2: (groupid=0, jobs=1): err= 0: pid=792283: Sun Nov 17 18:46:59 2024 00:26:13.567 write: IOPS=269, BW=67.3MiB/s (70.6MB/s)(686MiB/10193msec); 0 zone resets 00:26:13.567 slat (usec): 
min=26, max=133978, avg=3284.21, stdev=8455.98 00:26:13.567 clat (msec): min=4, max=555, avg=234.15, stdev=146.80 00:26:13.567 lat (msec): min=4, max=555, avg=237.43, stdev=148.97 00:26:13.567 clat percentiles (msec): 00:26:13.567 | 1.00th=[ 20], 5.00th=[ 49], 10.00th=[ 74], 20.00th=[ 114], 00:26:13.567 | 30.00th=[ 131], 40.00th=[ 140], 50.00th=[ 182], 60.00th=[ 239], 00:26:13.567 | 70.00th=[ 326], 80.00th=[ 401], 90.00th=[ 464], 95.00th=[ 502], 00:26:13.567 | 99.00th=[ 550], 99.50th=[ 550], 99.90th=[ 558], 99.95th=[ 558], 00:26:13.567 | 99.99th=[ 558] 00:26:13.567 bw ( KiB/s): min=28672, max=181760, per=6.57%, avg=68641.75, stdev=44101.65, samples=20 00:26:13.567 iops : min= 112, max= 710, avg=268.10, stdev=172.26, samples=20 00:26:13.567 lat (msec) : 10=0.44%, 20=0.80%, 50=4.48%, 100=10.46%, 250=45.39% 00:26:13.567 lat (msec) : 500=33.37%, 750=5.06% 00:26:13.567 cpu : usr=0.89%, sys=0.88%, ctx=1068, majf=0, minf=1 00:26:13.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:13.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.567 issued rwts: total=0,2745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.567 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.567 job3: (groupid=0, jobs=1): err= 0: pid=792284: Sun Nov 17 18:46:59 2024 00:26:13.567 write: IOPS=681, BW=170MiB/s (179MB/s)(1715MiB/10063msec); 0 zone resets 00:26:13.567 slat (usec): min=18, max=71561, avg=1274.83, stdev=3631.07 00:26:13.567 clat (msec): min=2, max=605, avg=92.58, stdev=84.97 00:26:13.567 lat (msec): min=2, max=605, avg=93.86, stdev=85.90 00:26:13.567 clat percentiles (msec): 00:26:13.567 | 1.00th=[ 30], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 45], 00:26:13.567 | 30.00th=[ 46], 40.00th=[ 47], 50.00th=[ 52], 60.00th=[ 85], 00:26:13.567 | 70.00th=[ 97], 80.00th=[ 112], 90.00th=[ 194], 95.00th=[ 279], 00:26:13.567 | 99.00th=[ 456], 
99.50th=[ 523], 99.90th=[ 600], 99.95th=[ 600], 00:26:13.567 | 99.99th=[ 609] 00:26:13.567 bw ( KiB/s): min=34304, max=373248, per=16.65%, avg=173977.60, stdev=107352.87, samples=20 00:26:13.567 iops : min= 134, max= 1458, avg=679.60, stdev=419.35, samples=20 00:26:13.567 lat (msec) : 4=0.03%, 10=0.28%, 20=0.32%, 50=48.46%, 100=24.76% 00:26:13.567 lat (msec) : 250=20.06%, 500=5.44%, 750=0.66% 00:26:13.567 cpu : usr=2.08%, sys=2.24%, ctx=2198, majf=0, minf=1 00:26:13.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:13.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.567 issued rwts: total=0,6859,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.567 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.567 job4: (groupid=0, jobs=1): err= 0: pid=792285: Sun Nov 17 18:46:59 2024 00:26:13.567 write: IOPS=309, BW=77.5MiB/s (81.3MB/s)(790MiB/10192msec); 0 zone resets 00:26:13.567 slat (usec): min=21, max=122978, avg=2255.42, stdev=6706.11 00:26:13.567 clat (usec): min=1641, max=588815, avg=204106.74, stdev=126248.87 00:26:13.567 lat (usec): min=1679, max=591968, avg=206362.16, stdev=127618.04 00:26:13.567 clat percentiles (msec): 00:26:13.567 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 47], 20.00th=[ 85], 00:26:13.567 | 30.00th=[ 121], 40.00th=[ 153], 50.00th=[ 180], 60.00th=[ 232], 00:26:13.567 | 70.00th=[ 279], 80.00th=[ 317], 90.00th=[ 368], 95.00th=[ 422], 00:26:13.567 | 99.00th=[ 535], 99.50th=[ 550], 99.90th=[ 584], 99.95th=[ 584], 00:26:13.567 | 99.99th=[ 592] 00:26:13.567 bw ( KiB/s): min=30720, max=128000, per=7.58%, avg=79232.00, stdev=28597.26, samples=20 00:26:13.567 iops : min= 120, max= 500, avg=309.50, stdev=111.71, samples=20 00:26:13.567 lat (msec) : 2=0.03%, 4=0.54%, 10=1.49%, 20=0.79%, 50=8.86% 00:26:13.567 lat (msec) : 100=13.45%, 250=37.99%, 500=34.41%, 750=2.44% 00:26:13.567 cpu : usr=0.98%, 
sys=0.97%, ctx=1699, majf=0, minf=1 00:26:13.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:13.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.568 issued rwts: total=0,3159,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.568 job5: (groupid=0, jobs=1): err= 0: pid=792286: Sun Nov 17 18:46:59 2024 00:26:13.568 write: IOPS=486, BW=122MiB/s (127MB/s)(1229MiB/10110msec); 0 zone resets 00:26:13.568 slat (usec): min=22, max=130716, avg=1434.27, stdev=5369.82 00:26:13.568 clat (usec): min=1117, max=597332, avg=130118.47, stdev=122167.14 00:26:13.568 lat (usec): min=1156, max=597386, avg=131552.74, stdev=123671.00 00:26:13.568 clat percentiles (msec): 00:26:13.568 | 1.00th=[ 10], 5.00th=[ 28], 10.00th=[ 39], 20.00th=[ 43], 00:26:13.568 | 30.00th=[ 58], 40.00th=[ 78], 50.00th=[ 94], 60.00th=[ 99], 00:26:13.568 | 70.00th=[ 109], 80.00th=[ 201], 90.00th=[ 338], 95.00th=[ 409], 00:26:13.568 | 99.00th=[ 531], 99.50th=[ 550], 99.90th=[ 584], 99.95th=[ 584], 00:26:13.568 | 99.99th=[ 600] 00:26:13.568 bw ( KiB/s): min=31232, max=378368, per=11.89%, avg=124244.35, stdev=89975.38, samples=20 00:26:13.568 iops : min= 122, max= 1478, avg=485.30, stdev=351.48, samples=20 00:26:13.568 lat (msec) : 2=0.20%, 4=0.28%, 10=0.71%, 20=2.12%, 50=25.00% 00:26:13.568 lat (msec) : 100=34.38%, 250=19.93%, 500=15.60%, 750=1.77% 00:26:13.568 cpu : usr=1.83%, sys=1.68%, ctx=2705, majf=0, minf=1 00:26:13.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:13.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.568 issued rwts: total=0,4916,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.568 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:26:13.568 job6: (groupid=0, jobs=1): err= 0: pid=792287: Sun Nov 17 18:46:59 2024 00:26:13.568 write: IOPS=563, BW=141MiB/s (148MB/s)(1422MiB/10088msec); 0 zone resets 00:26:13.568 slat (usec): min=16, max=192926, avg=729.71, stdev=4001.72 00:26:13.568 clat (usec): min=722, max=701926, avg=112745.14, stdev=97494.21 00:26:13.568 lat (usec): min=747, max=701973, avg=113474.85, stdev=97897.91 00:26:13.568 clat percentiles (usec): 00:26:13.568 | 1.00th=[ 1942], 5.00th=[ 12518], 10.00th=[ 20579], 20.00th=[ 41681], 00:26:13.568 | 30.00th=[ 60556], 40.00th=[ 70779], 50.00th=[ 85459], 60.00th=[103285], 00:26:13.568 | 70.00th=[130548], 80.00th=[160433], 90.00th=[240124], 95.00th=[312476], 00:26:13.568 | 99.00th=[467665], 99.50th=[505414], 99.90th=[675283], 99.95th=[692061], 00:26:13.568 | 99.99th=[700449] 00:26:13.568 bw ( KiB/s): min=81920, max=270336, per=13.78%, avg=143988.70, stdev=43814.48, samples=20 00:26:13.568 iops : min= 320, max= 1056, avg=562.45, stdev=171.15, samples=20 00:26:13.568 lat (usec) : 750=0.02%, 1000=0.32% 00:26:13.568 lat (msec) : 2=0.70%, 4=1.09%, 10=1.42%, 20=6.12%, 50=14.86% 00:26:13.568 lat (msec) : 100=33.34%, 250=33.48%, 500=8.14%, 750=0.51% 00:26:13.568 cpu : usr=1.86%, sys=2.16%, ctx=4162, majf=0, minf=1 00:26:13.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:13.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.568 issued rwts: total=0,5687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.568 job7: (groupid=0, jobs=1): err= 0: pid=792288: Sun Nov 17 18:46:59 2024 00:26:13.568 write: IOPS=242, BW=60.5MiB/s (63.4MB/s)(613MiB/10127msec); 0 zone resets 00:26:13.568 slat (usec): min=21, max=88212, avg=3437.24, stdev=8590.92 00:26:13.568 clat (usec): min=1984, max=603251, avg=260854.43, 
stdev=154689.68 00:26:13.568 lat (msec): min=2, max=609, avg=264.29, stdev=156.92 00:26:13.568 clat percentiles (msec): 00:26:13.568 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 20], 20.00th=[ 102], 00:26:13.568 | 30.00th=[ 182], 40.00th=[ 247], 50.00th=[ 275], 60.00th=[ 309], 00:26:13.568 | 70.00th=[ 355], 80.00th=[ 401], 90.00th=[ 460], 95.00th=[ 502], 00:26:13.568 | 99.00th=[ 575], 99.50th=[ 584], 99.90th=[ 584], 99.95th=[ 600], 00:26:13.568 | 99.99th=[ 600] 00:26:13.568 bw ( KiB/s): min=28672, max=235520, per=5.85%, avg=61139.60, stdev=45155.73, samples=20 00:26:13.568 iops : min= 112, max= 920, avg=238.80, stdev=176.39, samples=20 00:26:13.568 lat (msec) : 2=0.04%, 4=0.82%, 10=4.90%, 20=4.53%, 50=6.85% 00:26:13.568 lat (msec) : 100=2.61%, 250=21.22%, 500=54.06%, 750=4.98% 00:26:13.568 cpu : usr=0.71%, sys=1.00%, ctx=1186, majf=0, minf=1 00:26:13.568 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:13.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.568 issued rwts: total=0,2451,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.568 job8: (groupid=0, jobs=1): err= 0: pid=792289: Sun Nov 17 18:46:59 2024 00:26:13.568 write: IOPS=353, BW=88.4MiB/s (92.7MB/s)(894MiB/10112msec); 0 zone resets 00:26:13.568 slat (usec): min=19, max=74173, avg=2642.65, stdev=6300.92 00:26:13.568 clat (usec): min=1886, max=531323, avg=178251.42, stdev=125108.39 00:26:13.568 lat (usec): min=1929, max=531413, avg=180894.07, stdev=126819.72 00:26:13.568 clat percentiles (msec): 00:26:13.568 | 1.00th=[ 8], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 92], 00:26:13.568 | 30.00th=[ 118], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 155], 00:26:13.568 | 70.00th=[ 174], 80.00th=[ 266], 90.00th=[ 401], 95.00th=[ 472], 00:26:13.568 | 99.00th=[ 527], 99.50th=[ 527], 99.90th=[ 531], 99.95th=[ 531], 
00:26:13.568 | 99.99th=[ 531] 00:26:13.568 bw ( KiB/s): min=30720, max=265216, per=8.61%, avg=89940.20, stdev=56087.64, samples=20 00:26:13.568 iops : min= 120, max= 1036, avg=351.30, stdev=219.10, samples=20 00:26:13.568 lat (msec) : 2=0.03%, 4=0.28%, 10=1.20%, 20=1.01%, 50=7.61% 00:26:13.568 lat (msec) : 100=13.51%, 250=55.43%, 500=18.43%, 750=2.52% 00:26:13.568 cpu : usr=1.11%, sys=1.07%, ctx=1092, majf=0, minf=1 00:26:13.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:26:13.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.568 issued rwts: total=0,3576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.568 job9: (groupid=0, jobs=1): err= 0: pid=792290: Sun Nov 17 18:46:59 2024 00:26:13.568 write: IOPS=308, BW=77.2MiB/s (80.9MB/s)(787MiB/10193msec); 0 zone resets 00:26:13.568 slat (usec): min=15, max=126764, avg=2023.16, stdev=6868.27 00:26:13.568 clat (usec): min=1317, max=568529, avg=205153.91, stdev=149778.82 00:26:13.568 lat (usec): min=1351, max=579305, avg=207177.06, stdev=151434.20 00:26:13.568 clat percentiles (msec): 00:26:13.568 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 26], 20.00th=[ 63], 00:26:13.568 | 30.00th=[ 88], 40.00th=[ 110], 50.00th=[ 171], 60.00th=[ 239], 00:26:13.568 | 70.00th=[ 326], 80.00th=[ 376], 90.00th=[ 418], 95.00th=[ 443], 00:26:13.568 | 99.00th=[ 506], 99.50th=[ 527], 99.90th=[ 558], 99.95th=[ 558], 00:26:13.568 | 99.99th=[ 567] 00:26:13.568 bw ( KiB/s): min=34816, max=248832, per=7.55%, avg=78899.20, stdev=48782.87, samples=20 00:26:13.568 iops : min= 136, max= 972, avg=308.20, stdev=190.56, samples=20 00:26:13.568 lat (msec) : 2=0.13%, 4=1.34%, 10=4.83%, 20=2.54%, 50=5.94% 00:26:13.568 lat (msec) : 100=21.11%, 250=25.21%, 500=37.79%, 750=1.11% 00:26:13.568 cpu : usr=0.88%, sys=1.24%, ctx=1878, majf=0, minf=1 
00:26:13.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:13.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.568 issued rwts: total=0,3146,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.568 job10: (groupid=0, jobs=1): err= 0: pid=792291: Sun Nov 17 18:46:59 2024 00:26:13.568 write: IOPS=292, BW=73.1MiB/s (76.6MB/s)(740MiB/10124msec); 0 zone resets 00:26:13.568 slat (usec): min=23, max=83860, avg=2559.04, stdev=6692.46 00:26:13.568 clat (msec): min=4, max=581, avg=215.69, stdev=119.29 00:26:13.568 lat (msec): min=4, max=581, avg=218.25, stdev=120.75 00:26:13.568 clat percentiles (msec): 00:26:13.568 | 1.00th=[ 8], 5.00th=[ 32], 10.00th=[ 62], 20.00th=[ 97], 00:26:13.568 | 30.00th=[ 150], 40.00th=[ 176], 50.00th=[ 218], 60.00th=[ 255], 00:26:13.568 | 70.00th=[ 279], 80.00th=[ 305], 90.00th=[ 359], 95.00th=[ 435], 00:26:13.568 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 584], 99.95th=[ 584], 00:26:13.568 | 99.99th=[ 584] 00:26:13.568 bw ( KiB/s): min=30720, max=173056, per=7.10%, avg=74169.65, stdev=36341.77, samples=20 00:26:13.568 iops : min= 120, max= 676, avg=289.70, stdev=141.97, samples=20 00:26:13.568 lat (msec) : 10=1.62%, 20=1.62%, 50=5.30%, 100=12.33%, 250=37.40% 00:26:13.568 lat (msec) : 500=39.29%, 750=2.43% 00:26:13.568 cpu : usr=1.06%, sys=0.95%, ctx=1393, majf=0, minf=1 00:26:13.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:13.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.568 issued rwts: total=0,2960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.568 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.568 00:26:13.569 Run status group 0 (all jobs): 
00:26:13.569 WRITE: bw=1021MiB/s (1070MB/s), 60.5MiB/s-170MiB/s (63.4MB/s-179MB/s), io=10.2GiB (10.9GB), run=10063-10193msec 00:26:13.569 00:26:13.569 Disk stats (read/write): 00:26:13.569 nvme0n1: ios=49/6584, merge=0/0, ticks=189/1212943, in_queue=1213132, util=98.96% 00:26:13.569 nvme10n1: ios=41/5472, merge=0/0, ticks=1214/1249592, in_queue=1250806, util=100.00% 00:26:13.569 nvme1n1: ios=48/5484, merge=0/0, ticks=810/1237172, in_queue=1237982, util=100.00% 00:26:13.569 nvme2n1: ios=44/13522, merge=0/0, ticks=704/1215277, in_queue=1215981, util=100.00% 00:26:13.569 nvme3n1: ios=0/6313, merge=0/0, ticks=0/1250160, in_queue=1250160, util=97.97% 00:26:13.569 nvme4n1: ios=0/9603, merge=0/0, ticks=0/1216332, in_queue=1216332, util=98.21% 00:26:13.569 nvme5n1: ios=0/11169, merge=0/0, ticks=0/1229047, in_queue=1229047, util=98.35% 00:26:13.569 nvme6n1: ios=0/4745, merge=0/0, ticks=0/1208122, in_queue=1208122, util=98.45% 00:26:13.569 nvme7n1: ios=0/6985, merge=0/0, ticks=0/1210392, in_queue=1210392, util=98.82% 00:26:13.569 nvme8n1: ios=37/6285, merge=0/0, ticks=564/1250588, in_queue=1251152, util=99.88% 00:26:13.569 nvme9n1: ios=46/5769, merge=0/0, ticks=1288/1205564, in_queue=1206852, util=100.00% 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:13.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:13.569 18:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:13.569 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o 
NAME,SERIAL 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.569 18:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:13.569 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:13.569 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:13.569 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:13.569 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:13.569 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:13.569 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:13.569 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:13.569 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:13.569 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:13.569 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.569 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.569 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.569 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.569 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:13.827 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:13.827 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:13.827 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:13.827 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:13.827 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:14.091 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:14.091 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:14.091 18:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:14.091 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:14.091 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.091 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.091 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.091 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.091 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:14.091 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:14.091 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:14.091 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:14.091 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:14.091 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:14.091 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:14.091 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:14.092 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:14.092 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:14.092 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.092 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.092 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.092 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.092 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:14.390 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:14.390 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:14.390 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:14.390 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:14.390 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:14.390 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:14.390 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:14.390 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:14.390 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:14.390 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.390 18:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.390 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.390 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.390 18:47:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:14.673 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:14.673 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.673 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:14.932 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:14.932 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:14.932 18:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.932 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:15.190 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # local i=0 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@124 -- # set +e 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:15.190 rmmod nvme_tcp 00:26:15.190 rmmod nvme_fabrics 00:26:15.190 rmmod nvme_keyring 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 787288 ']' 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 787288 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 787288 ']' 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 787288 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 787288 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 787288' 00:26:15.190 killing process 
with pid 787288 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 787288 00:26:15.190 18:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 787288 00:26:15.758 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:15.758 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:15.758 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:15.758 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:15.758 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:15.758 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:15.758 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:15.758 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:15.758 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:15.758 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.758 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.758 18:47:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:18.296 00:26:18.296 real 1m1.023s 00:26:18.296 user 3m33.066s 00:26:18.296 sys 0m17.475s 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:18.296 ************************************ 00:26:18.296 END TEST nvmf_multiconnection 00:26:18.296 ************************************ 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:18.296 ************************************ 00:26:18.296 START TEST nvmf_initiator_timeout 00:26:18.296 ************************************ 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:18.296 * Looking for test storage... 
00:26:18.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:18.296 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:18.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.297 --rc genhtml_branch_coverage=1 00:26:18.297 --rc genhtml_function_coverage=1 00:26:18.297 --rc genhtml_legend=1 00:26:18.297 --rc geninfo_all_blocks=1 00:26:18.297 --rc geninfo_unexecuted_blocks=1 00:26:18.297 00:26:18.297 ' 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:18.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.297 --rc genhtml_branch_coverage=1 00:26:18.297 --rc genhtml_function_coverage=1 00:26:18.297 --rc genhtml_legend=1 00:26:18.297 --rc geninfo_all_blocks=1 00:26:18.297 --rc geninfo_unexecuted_blocks=1 00:26:18.297 00:26:18.297 ' 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:18.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.297 --rc genhtml_branch_coverage=1 00:26:18.297 --rc genhtml_function_coverage=1 00:26:18.297 --rc genhtml_legend=1 00:26:18.297 --rc geninfo_all_blocks=1 00:26:18.297 --rc geninfo_unexecuted_blocks=1 00:26:18.297 00:26:18.297 ' 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:18.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.297 --rc genhtml_branch_coverage=1 00:26:18.297 --rc genhtml_function_coverage=1 00:26:18.297 --rc genhtml_legend=1 00:26:18.297 --rc geninfo_all_blocks=1 00:26:18.297 --rc geninfo_unexecuted_blocks=1 00:26:18.297 00:26:18.297 ' 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:18.297 
18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:18.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:18.297 18:47:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:20.203 18:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:20.203 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:20.203 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:20.203 18:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:20.203 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:20.204 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.204 18:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:20.204 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:20.204 18:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:20.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:26:20.204 00:26:20.204 --- 10.0.0.2 ping statistics --- 00:26:20.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.204 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:20.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:20.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:26:20.204 00:26:20.204 --- 10.0.0.1 ping statistics --- 00:26:20.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.204 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=795487 
00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 795487 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 795487 ']' 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.204 18:47:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.464 [2024-11-17 18:47:06.806062] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:26:20.464 [2024-11-17 18:47:06.806142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.464 [2024-11-17 18:47:06.883573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:20.464 [2024-11-17 18:47:06.932234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:20.464 [2024-11-17 18:47:06.932291] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.464 [2024-11-17 18:47:06.932304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.464 [2024-11-17 18:47:06.932316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.464 [2024-11-17 18:47:06.932325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.464 [2024-11-17 18:47:06.933976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.464 [2024-11-17 18:47:06.934065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:20.464 [2024-11-17 18:47:06.934130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:20.464 [2024-11-17 18:47:06.934133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.722 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.722 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:20.722 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:20.722 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:20.722 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.722 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.722 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:20.722 
18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:20.722 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.722 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.722 Malloc0 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.723 Delay0 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.723 [2024-11-17 18:47:07.129526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:20.723 [2024-11-17 18:47:07.157831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.723 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:21.288 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:21.288 
18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:21.288 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:21.288 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:21.288 18:47:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:23.814 18:47:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:23.814 18:47:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:23.814 18:47:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:23.814 18:47:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:23.814 18:47:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:23.814 18:47:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:23.814 18:47:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=795911 00:26:23.814 18:47:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:23.814 18:47:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:23.814 [global] 00:26:23.814 thread=1 00:26:23.814 invalidate=1 00:26:23.814 rw=write 00:26:23.814 time_based=1 00:26:23.814 runtime=60 00:26:23.814 ioengine=libaio 00:26:23.814 direct=1 00:26:23.814 bs=4096 00:26:23.814 
iodepth=1 00:26:23.814 norandommap=0 00:26:23.815 numjobs=1 00:26:23.815 00:26:23.815 verify_dump=1 00:26:23.815 verify_backlog=512 00:26:23.815 verify_state_save=0 00:26:23.815 do_verify=1 00:26:23.815 verify=crc32c-intel 00:26:23.815 [job0] 00:26:23.815 filename=/dev/nvme0n1 00:26:23.815 Could not set queue depth (nvme0n1) 00:26:23.815 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:23.815 fio-3.35 00:26:23.815 Starting 1 thread 00:26:26.351 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:26.351 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.351 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.351 true 00:26:26.351 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.351 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:26.351 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.351 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.351 true 00:26:26.351 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.351 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:26.351 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.352 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:26.352 true 00:26:26.352 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.352 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:26.352 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.352 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:26.352 true 00:26:26.352 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.352 18:47:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.634 true 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.634 true 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.634 18:47:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.634 true 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:29.634 true 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:29.634 18:47:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 795911 00:27:25.841 00:27:25.841 job0: (groupid=0, jobs=1): err= 0: pid=795980: Sun Nov 17 18:48:10 2024 00:27:25.841 read: IOPS=125, BW=502KiB/s (514kB/s)(29.4MiB/60013msec) 00:27:25.841 slat (nsec): min=4669, max=67603, avg=13660.08, stdev=8415.14 00:27:25.841 clat (usec): min=203, max=40820k, avg=7730.08, stdev=470367.69 00:27:25.841 lat (usec): min=209, max=40820k, avg=7743.74, stdev=470367.78 00:27:25.841 clat percentiles (usec): 00:27:25.841 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 231], 00:27:25.841 | 20.00th=[ 239], 30.00th=[ 245], 40.00th=[ 249], 00:27:25.841 | 50.00th=[ 258], 60.00th=[ 269], 70.00th=[ 285], 00:27:25.841 | 80.00th=[ 302], 90.00th=[ 375], 95.00th=[ 627], 
00:27:25.841 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:27:25.841 | 99.95th=[ 42206], 99.99th=[17112761] 00:27:25.841 write: IOPS=127, BW=512KiB/s (524kB/s)(30.0MiB/60013msec); 0 zone resets 00:27:25.841 slat (usec): min=6, max=25894, avg=15.64, stdev=295.40 00:27:25.841 clat (usec): min=160, max=848, avg=195.59, stdev=25.67 00:27:25.841 lat (usec): min=168, max=26165, avg=211.23, stdev=297.61 00:27:25.841 clat percentiles (usec): 00:27:25.841 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:27:25.841 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:27:25.841 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 237], 00:27:25.841 | 99.00th=[ 302], 99.50th=[ 359], 99.90th=[ 400], 99.95th=[ 408], 00:27:25.841 | 99.99th=[ 848] 00:27:25.841 bw ( KiB/s): min= 4368, max= 9048, per=100.00%, avg=7680.00, stdev=1409.36, samples=8 00:27:25.841 iops : min= 1092, max= 2262, avg=1920.00, stdev=352.34, samples=8 00:27:25.841 lat (usec) : 250=68.96%, 500=28.39%, 750=0.17%, 1000=0.01% 00:27:25.841 lat (msec) : 50=2.46%, >=2000=0.01% 00:27:25.841 cpu : usr=0.17%, sys=0.34%, ctx=15218, majf=0, minf=1 00:27:25.841 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:25.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.841 issued rwts: total=7533,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:25.842 00:27:25.842 Run status group 0 (all jobs): 00:27:25.842 READ: bw=502KiB/s (514kB/s), 502KiB/s-502KiB/s (514kB/s-514kB/s), io=29.4MiB (30.9MB), run=60013-60013msec 00:27:25.842 WRITE: bw=512KiB/s (524kB/s), 512KiB/s-512KiB/s (524kB/s-524kB/s), io=30.0MiB (31.5MB), run=60013-60013msec 00:27:25.842 00:27:25.842 Disk stats (read/write): 00:27:25.842 nvme0n1: ios=7582/7680, merge=0/0, ticks=18482/1450, in_queue=19932, 
util=99.81% 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:25.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:25.842 nvmf hotplug test: fio successful as expected 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:25.842 18:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:25.842 rmmod nvme_tcp 00:27:25.842 rmmod nvme_fabrics 00:27:25.842 rmmod nvme_keyring 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 795487 ']' 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 795487 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 795487 ']' 00:27:25.842 
18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 795487 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 795487 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 795487' 00:27:25.842 killing process with pid 795487 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 795487 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 795487 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # 
iptables-restore 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.842 18:48:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.410 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:26.410 00:27:26.410 real 1m8.368s 00:27:26.410 user 4m10.550s 00:27:26.410 sys 0m7.008s 00:27:26.410 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:26.410 18:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:26.410 ************************************ 00:27:26.410 END TEST nvmf_initiator_timeout 00:27:26.410 ************************************ 00:27:26.410 18:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:26.410 18:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:26.410 18:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:26.410 18:48:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:26.410 18:48:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:28.944 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:28.945 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:28.945 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:28.945 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:28.945 18:48:14 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:28.945 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:28.945 ************************************ 00:27:28.945 START 
TEST nvmf_perf_adq 00:27:28.945 ************************************ 00:27:28.945 18:48:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:28.945 * Looking for test storage... 00:27:28.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:28.945 18:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:28.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.945 --rc genhtml_branch_coverage=1 00:27:28.945 --rc genhtml_function_coverage=1 00:27:28.945 --rc genhtml_legend=1 00:27:28.945 --rc geninfo_all_blocks=1 00:27:28.945 --rc geninfo_unexecuted_blocks=1 00:27:28.945 00:27:28.945 ' 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:28.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.945 --rc genhtml_branch_coverage=1 00:27:28.945 --rc genhtml_function_coverage=1 00:27:28.945 --rc genhtml_legend=1 00:27:28.945 --rc geninfo_all_blocks=1 00:27:28.945 --rc geninfo_unexecuted_blocks=1 00:27:28.945 00:27:28.945 ' 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:28.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.945 --rc genhtml_branch_coverage=1 00:27:28.945 --rc genhtml_function_coverage=1 00:27:28.945 --rc genhtml_legend=1 00:27:28.945 --rc geninfo_all_blocks=1 00:27:28.945 --rc geninfo_unexecuted_blocks=1 00:27:28.945 00:27:28.945 ' 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:28.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:28.945 --rc genhtml_branch_coverage=1 00:27:28.945 --rc genhtml_function_coverage=1 00:27:28.945 --rc genhtml_legend=1 00:27:28.945 --rc geninfo_all_blocks=1 00:27:28.945 --rc geninfo_unexecuted_blocks=1 00:27:28.945 00:27:28.945 ' 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.945 
18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.945 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:28.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:28.946 18:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:28.946 18:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:30.849 18:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:30.849 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:30.849 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.849 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:30.850 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:30.850 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:30.850 18:48:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:31.784 18:48:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:34.315 18:48:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:39.592 18:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.592 18:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:39.592 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:27:39.592 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:39.592 Found net devices under 0000:0a:00.0: cvl_0_0 
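
The "Found net devices under …" lines above come from globbing the NIC's sysfs node: each PCI network device exposes its kernel interface names under `/sys/bus/pci/devices/<addr>/net/`. A runnable sketch of that lookup — the `SYSFS_ROOT` override and the function wrapper are our additions for testability, not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Sketch of the net-device discovery behind the "Found net devices under"
# messages in the trace. SYSFS_ROOT defaults to the real /sys but can be
# pointed at a fake tree for exercising the logic without the hardware.
find_pci_net_devs() {
    local sysfs_root=${SYSFS_ROOT:-/sys} pci=$1
    local pci_net_devs=("$sysfs_root/bus/pci/devices/$pci/net/"*)
    if [[ -e ${pci_net_devs[0]} ]]; then
        # Strip the leading path, keeping only the interface names.
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    else
        echo "No net devices under $pci"
    fi
}

find_pci_net_devs 0000:0a:00.0
```

The `[[ -e … ]]` check matters: when the glob matches nothing, bash leaves the literal pattern in the array, so testing the first element for existence distinguishes "no interfaces" from a real hit.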
00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:39.592 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:39.592 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:39.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:27:39.593 00:27:39.593 --- 10.0.0.2 ping statistics --- 00:27:39.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.593 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:39.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:27:39.593 00:27:39.593 --- 10.0.0.1 ping statistics --- 00:27:39.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.593 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=808249 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 808249 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 808249 ']' 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.593 [2024-11-17 18:48:25.502627] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:27:39.593 [2024-11-17 18:48:25.502747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.593 [2024-11-17 18:48:25.580050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:39.593 [2024-11-17 18:48:25.629540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.593 [2024-11-17 18:48:25.629597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:39.593 [2024-11-17 18:48:25.629611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.593 [2024-11-17 18:48:25.629623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.593 [2024-11-17 18:48:25.629633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:39.593 [2024-11-17 18:48:25.634697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.593 [2024-11-17 18:48:25.634765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.593 [2024-11-17 18:48:25.634815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:39.593 [2024-11-17 18:48:25.634819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.593 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.594 [2024-11-17 18:48:25.918580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.594 
18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.594 Malloc1 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:39.594 [2024-11-17 18:48:25.990713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=808280 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:39.594 18:48:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:41.493 18:48:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:41.493 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.493 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:41.493 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.493 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:41.493 "tick_rate": 2700000000, 00:27:41.493 "poll_groups": [ 00:27:41.493 { 00:27:41.493 "name": "nvmf_tgt_poll_group_000", 00:27:41.493 "admin_qpairs": 1, 00:27:41.493 "io_qpairs": 1, 00:27:41.493 "current_admin_qpairs": 1, 00:27:41.493 "current_io_qpairs": 1, 00:27:41.493 "pending_bdev_io": 0, 00:27:41.493 "completed_nvme_io": 19632, 00:27:41.493 "transports": [ 00:27:41.493 { 00:27:41.493 "trtype": "TCP" 00:27:41.493 } 00:27:41.493 ] 00:27:41.493 }, 00:27:41.493 { 00:27:41.493 "name": "nvmf_tgt_poll_group_001", 00:27:41.493 "admin_qpairs": 0, 00:27:41.493 "io_qpairs": 1, 00:27:41.493 "current_admin_qpairs": 0, 00:27:41.493 "current_io_qpairs": 1, 00:27:41.493 "pending_bdev_io": 0, 00:27:41.493 "completed_nvme_io": 19530, 00:27:41.493 "transports": [ 
00:27:41.493 { 00:27:41.493 "trtype": "TCP" 00:27:41.493 } 00:27:41.493 ] 00:27:41.493 }, 00:27:41.493 { 00:27:41.493 "name": "nvmf_tgt_poll_group_002", 00:27:41.493 "admin_qpairs": 0, 00:27:41.493 "io_qpairs": 1, 00:27:41.493 "current_admin_qpairs": 0, 00:27:41.493 "current_io_qpairs": 1, 00:27:41.493 "pending_bdev_io": 0, 00:27:41.493 "completed_nvme_io": 20196, 00:27:41.493 "transports": [ 00:27:41.493 { 00:27:41.493 "trtype": "TCP" 00:27:41.493 } 00:27:41.493 ] 00:27:41.493 }, 00:27:41.493 { 00:27:41.493 "name": "nvmf_tgt_poll_group_003", 00:27:41.493 "admin_qpairs": 0, 00:27:41.493 "io_qpairs": 1, 00:27:41.493 "current_admin_qpairs": 0, 00:27:41.493 "current_io_qpairs": 1, 00:27:41.493 "pending_bdev_io": 0, 00:27:41.493 "completed_nvme_io": 19519, 00:27:41.493 "transports": [ 00:27:41.493 { 00:27:41.493 "trtype": "TCP" 00:27:41.493 } 00:27:41.493 ] 00:27:41.493 } 00:27:41.493 ] 00:27:41.493 }' 00:27:41.493 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:41.493 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:41.493 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:41.493 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:41.493 18:48:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 808280 00:27:49.690 Initializing NVMe Controllers 00:27:49.690 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:49.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:49.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:49.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:49.690 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:49.690 Initialization complete. Launching workers. 00:27:49.690 ======================================================== 00:27:49.690 Latency(us) 00:27:49.690 Device Information : IOPS MiB/s Average min max 00:27:49.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10335.90 40.37 6191.86 2311.20 10202.34 00:27:49.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10253.80 40.05 6242.52 2614.35 10246.79 00:27:49.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10584.90 41.35 6047.10 2466.89 10532.91 00:27:49.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10490.40 40.98 6100.61 2315.91 10294.91 00:27:49.690 ======================================================== 00:27:49.690 Total : 41665.00 162.75 6144.58 2311.20 10532.91 00:27:49.690 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:49.690 rmmod nvme_tcp 00:27:49.690 rmmod nvme_fabrics 00:27:49.690 rmmod nvme_keyring 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:49.690 18:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 808249 ']' 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 808249 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 808249 ']' 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 808249 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 808249 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 808249' 00:27:49.690 killing process with pid 808249 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 808249 00:27:49.690 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 808249 00:27:49.948 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:49.948 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:49.948 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:49.948 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:49.948 18:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:27:49.948 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:49.948 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:27:49.948 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:49.948 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:49.948 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.949 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.949 18:48:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.487 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:52.487 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:52.487 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:52.487 18:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:52.746 18:48:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:56.030 18:48:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:01.309 18:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:01.309 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:01.309 18:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:01.309 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:01.309 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:28:01.310 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:01.310 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:01.310 18:48:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:01.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:28:01.310 00:28:01.310 --- 10.0.0.2 ping statistics --- 00:28:01.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.310 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:01.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:01.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:28:01.310 00:28:01.310 --- 10.0.0.1 ping statistics --- 00:28:01.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.310 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:01.310 net.core.busy_poll = 1 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:01.310 net.core.busy_read = 1 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=811019 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
811019 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 811019 ']' 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:01.310 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.310 [2024-11-17 18:48:47.249467] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:01.310 [2024-11-17 18:48:47.249562] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.310 [2024-11-17 18:48:47.324506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:01.310 [2024-11-17 18:48:47.373549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.310 [2024-11-17 18:48:47.373603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.310 [2024-11-17 18:48:47.373631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.310 [2024-11-17 18:48:47.373642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:01.310 [2024-11-17 18:48:47.373652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:01.310 [2024-11-17 18:48:47.375163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.310 [2024-11-17 18:48:47.375227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:01.310 [2024-11-17 18:48:47.375294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:01.311 [2024-11-17 18:48:47.375297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.311 [2024-11-17 18:48:47.669255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.311 18:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.311 Malloc1 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.311 [2024-11-17 18:48:47.739824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=811050 
00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:01.311 18:48:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:03.211 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:03.211 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.212 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:03.212 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.212 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:03.212 "tick_rate": 2700000000, 00:28:03.212 "poll_groups": [ 00:28:03.212 { 00:28:03.212 "name": "nvmf_tgt_poll_group_000", 00:28:03.212 "admin_qpairs": 1, 00:28:03.212 "io_qpairs": 3, 00:28:03.212 "current_admin_qpairs": 1, 00:28:03.212 "current_io_qpairs": 3, 00:28:03.212 "pending_bdev_io": 0, 00:28:03.212 "completed_nvme_io": 25743, 00:28:03.212 "transports": [ 00:28:03.212 { 00:28:03.212 "trtype": "TCP" 00:28:03.212 } 00:28:03.212 ] 00:28:03.212 }, 00:28:03.212 { 00:28:03.212 "name": "nvmf_tgt_poll_group_001", 00:28:03.212 "admin_qpairs": 0, 00:28:03.212 "io_qpairs": 1, 00:28:03.212 "current_admin_qpairs": 0, 00:28:03.212 "current_io_qpairs": 1, 00:28:03.212 "pending_bdev_io": 0, 00:28:03.212 "completed_nvme_io": 24937, 00:28:03.212 "transports": [ 00:28:03.212 { 00:28:03.212 "trtype": "TCP" 00:28:03.212 } 00:28:03.212 ] 00:28:03.212 }, 00:28:03.212 { 00:28:03.212 "name": "nvmf_tgt_poll_group_002", 00:28:03.212 "admin_qpairs": 0, 00:28:03.212 "io_qpairs": 0, 00:28:03.212 "current_admin_qpairs": 0, 
00:28:03.212 "current_io_qpairs": 0, 00:28:03.212 "pending_bdev_io": 0, 00:28:03.212 "completed_nvme_io": 0, 00:28:03.212 "transports": [ 00:28:03.212 { 00:28:03.212 "trtype": "TCP" 00:28:03.212 } 00:28:03.212 ] 00:28:03.212 }, 00:28:03.212 { 00:28:03.212 "name": "nvmf_tgt_poll_group_003", 00:28:03.212 "admin_qpairs": 0, 00:28:03.212 "io_qpairs": 0, 00:28:03.212 "current_admin_qpairs": 0, 00:28:03.212 "current_io_qpairs": 0, 00:28:03.212 "pending_bdev_io": 0, 00:28:03.212 "completed_nvme_io": 0, 00:28:03.212 "transports": [ 00:28:03.212 { 00:28:03.212 "trtype": "TCP" 00:28:03.212 } 00:28:03.212 ] 00:28:03.212 } 00:28:03.212 ] 00:28:03.212 }' 00:28:03.212 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:03.212 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:03.469 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:03.469 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:03.469 18:48:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 811050 00:28:11.577 Initializing NVMe Controllers 00:28:11.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:11.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:11.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:11.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:11.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:11.578 Initialization complete. Launching workers. 
00:28:11.578 ======================================================== 00:28:11.578 Latency(us) 00:28:11.578 Device Information : IOPS MiB/s Average min max 00:28:11.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4916.66 19.21 13018.53 2155.13 61227.05 00:28:11.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13618.50 53.20 4699.66 1581.25 46935.70 00:28:11.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4428.07 17.30 14455.12 1971.52 61765.63 00:28:11.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4257.47 16.63 15031.92 2187.43 61620.15 00:28:11.578 ======================================================== 00:28:11.578 Total : 27220.69 106.33 9405.20 1581.25 61765.63 00:28:11.578 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:11.578 rmmod nvme_tcp 00:28:11.578 rmmod nvme_fabrics 00:28:11.578 rmmod nvme_keyring 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:11.578 18:48:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 811019 ']' 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 811019 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 811019 ']' 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 811019 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 811019 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 811019' 00:28:11.578 killing process with pid 811019 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 811019 00:28:11.578 18:48:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 811019 00:28:11.837 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:11.837 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:11.837 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:11.837 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:11.837 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:11.837 18:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:11.837 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:11.837 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:11.837 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:11.837 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.837 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.837 18:48:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:15.125 00:28:15.125 real 0m46.282s 00:28:15.125 user 2m39.642s 00:28:15.125 sys 0m9.720s 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.125 ************************************ 00:28:15.125 END TEST nvmf_perf_adq 00:28:15.125 ************************************ 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:15.125 ************************************ 00:28:15.125 START TEST nvmf_shutdown 00:28:15.125 ************************************ 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:15.125 * Looking for test storage... 00:28:15.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:15.125 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:15.126 18:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:15.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.126 --rc genhtml_branch_coverage=1 00:28:15.126 --rc genhtml_function_coverage=1 00:28:15.126 --rc genhtml_legend=1 00:28:15.126 --rc geninfo_all_blocks=1 00:28:15.126 --rc geninfo_unexecuted_blocks=1 00:28:15.126 00:28:15.126 ' 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:15.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.126 --rc genhtml_branch_coverage=1 00:28:15.126 --rc genhtml_function_coverage=1 00:28:15.126 --rc genhtml_legend=1 00:28:15.126 --rc geninfo_all_blocks=1 00:28:15.126 --rc geninfo_unexecuted_blocks=1 00:28:15.126 00:28:15.126 ' 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:15.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.126 --rc genhtml_branch_coverage=1 00:28:15.126 --rc genhtml_function_coverage=1 00:28:15.126 --rc genhtml_legend=1 00:28:15.126 --rc geninfo_all_blocks=1 00:28:15.126 --rc geninfo_unexecuted_blocks=1 00:28:15.126 00:28:15.126 ' 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:15.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.126 --rc genhtml_branch_coverage=1 00:28:15.126 --rc genhtml_function_coverage=1 00:28:15.126 --rc genhtml_legend=1 00:28:15.126 --rc geninfo_all_blocks=1 00:28:15.126 --rc geninfo_unexecuted_blocks=1 00:28:15.126 00:28:15.126 ' 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:15.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:15.126 ************************************ 00:28:15.126 START TEST nvmf_shutdown_tc1 00:28:15.126 ************************************ 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:15.126 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:15.127 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.127 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:15.127 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.127 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:15.127 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:15.127 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:15.127 18:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.030 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.030 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:17.030 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:17.030 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:17.030 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:17.030 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:17.030 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:17.030 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:17.031 18:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.031 18:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:17.031 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.031 18:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:17.031 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:17.031 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:17.031 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:17.031 18:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.031 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.290 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.290 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.290 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:17.290 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.290 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.290 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:17.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:28:17.291 00:28:17.291 --- 10.0.0.2 ping statistics --- 00:28:17.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.291 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:17.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:28:17.291 00:28:17.291 --- 10.0.0.1 ping statistics --- 00:28:17.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.291 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=814349 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 814349 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 814349 ']' 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:17.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.291 18:49:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.291 [2024-11-17 18:49:03.793870] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:17.291 [2024-11-17 18:49:03.793946] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.550 [2024-11-17 18:49:03.873212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:17.550 [2024-11-17 18:49:03.922504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.550 [2024-11-17 18:49:03.922555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.550 [2024-11-17 18:49:03.922579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.550 [2024-11-17 18:49:03.922590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.550 [2024-11-17 18:49:03.922600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:17.550 [2024-11-17 18:49:03.924228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.550 [2024-11-17 18:49:03.924285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:17.550 [2024-11-17 18:49:03.924363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:17.550 [2024-11-17 18:49:03.924366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.550 [2024-11-17 18:49:04.067993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.550 18:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.550 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.808 Malloc1 00:28:17.808 [2024-11-17 18:49:04.177685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.808 Malloc2 00:28:17.808 Malloc3 00:28:17.808 Malloc4 00:28:17.808 Malloc5 00:28:18.066 Malloc6 00:28:18.066 Malloc7 00:28:18.066 Malloc8 00:28:18.066 Malloc9 
00:28:18.066 Malloc10 00:28:18.066 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.066 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:18.066 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:18.066 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.325 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=814526 00:28:18.325 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 814526 /var/tmp/bdevperf.sock 00:28:18.325 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 814526 ']' 00:28:18.325 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:18.325 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:18.325 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:18.325 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:18.325 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:18.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:18.325 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:18.325 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:18.325 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:18.325 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.325 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.325 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.325 { 00:28:18.325 "params": { 00:28:18.325 "name": "Nvme$subsystem", 00:28:18.325 "trtype": "$TEST_TRANSPORT", 00:28:18.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.326 "adrfam": "ipv4", 00:28:18.326 "trsvcid": "$NVMF_PORT", 00:28:18.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.326 "hdgst": ${hdgst:-false}, 00:28:18.326 "ddgst": ${ddgst:-false} 00:28:18.326 }, 00:28:18.326 "method": "bdev_nvme_attach_controller" 00:28:18.326 } 00:28:18.326 EOF 00:28:18.326 )") 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.326 { 00:28:18.326 "params": { 00:28:18.326 "name": "Nvme$subsystem", 00:28:18.326 "trtype": "$TEST_TRANSPORT", 00:28:18.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.326 "adrfam": "ipv4", 00:28:18.326 "trsvcid": "$NVMF_PORT", 00:28:18.326 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.326 "hdgst": ${hdgst:-false}, 00:28:18.326 "ddgst": ${ddgst:-false} 00:28:18.326 }, 00:28:18.326 "method": "bdev_nvme_attach_controller" 00:28:18.326 } 00:28:18.326 EOF 00:28:18.326 )") 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.326 { 00:28:18.326 "params": { 00:28:18.326 "name": "Nvme$subsystem", 00:28:18.326 "trtype": "$TEST_TRANSPORT", 00:28:18.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.326 "adrfam": "ipv4", 00:28:18.326 "trsvcid": "$NVMF_PORT", 00:28:18.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.326 "hdgst": ${hdgst:-false}, 00:28:18.326 "ddgst": ${ddgst:-false} 00:28:18.326 }, 00:28:18.326 "method": "bdev_nvme_attach_controller" 00:28:18.326 } 00:28:18.326 EOF 00:28:18.326 )") 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.326 { 00:28:18.326 "params": { 00:28:18.326 "name": "Nvme$subsystem", 00:28:18.326 "trtype": "$TEST_TRANSPORT", 00:28:18.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.326 "adrfam": "ipv4", 00:28:18.326 "trsvcid": "$NVMF_PORT", 00:28:18.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.326 "hdgst": 
${hdgst:-false}, 00:28:18.326 "ddgst": ${ddgst:-false} 00:28:18.326 }, 00:28:18.326 "method": "bdev_nvme_attach_controller" 00:28:18.326 } 00:28:18.326 EOF 00:28:18.326 )") 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.326 { 00:28:18.326 "params": { 00:28:18.326 "name": "Nvme$subsystem", 00:28:18.326 "trtype": "$TEST_TRANSPORT", 00:28:18.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.326 "adrfam": "ipv4", 00:28:18.326 "trsvcid": "$NVMF_PORT", 00:28:18.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.326 "hdgst": ${hdgst:-false}, 00:28:18.326 "ddgst": ${ddgst:-false} 00:28:18.326 }, 00:28:18.326 "method": "bdev_nvme_attach_controller" 00:28:18.326 } 00:28:18.326 EOF 00:28:18.326 )") 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.326 { 00:28:18.326 "params": { 00:28:18.326 "name": "Nvme$subsystem", 00:28:18.326 "trtype": "$TEST_TRANSPORT", 00:28:18.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.326 "adrfam": "ipv4", 00:28:18.326 "trsvcid": "$NVMF_PORT", 00:28:18.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.326 "hdgst": ${hdgst:-false}, 00:28:18.326 "ddgst": ${ddgst:-false} 00:28:18.326 }, 00:28:18.326 "method": "bdev_nvme_attach_controller" 
00:28:18.326 } 00:28:18.326 EOF 00:28:18.326 )") 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.326 { 00:28:18.326 "params": { 00:28:18.326 "name": "Nvme$subsystem", 00:28:18.326 "trtype": "$TEST_TRANSPORT", 00:28:18.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.326 "adrfam": "ipv4", 00:28:18.326 "trsvcid": "$NVMF_PORT", 00:28:18.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.326 "hdgst": ${hdgst:-false}, 00:28:18.326 "ddgst": ${ddgst:-false} 00:28:18.326 }, 00:28:18.326 "method": "bdev_nvme_attach_controller" 00:28:18.326 } 00:28:18.326 EOF 00:28:18.326 )") 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.326 { 00:28:18.326 "params": { 00:28:18.326 "name": "Nvme$subsystem", 00:28:18.326 "trtype": "$TEST_TRANSPORT", 00:28:18.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.326 "adrfam": "ipv4", 00:28:18.326 "trsvcid": "$NVMF_PORT", 00:28:18.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.326 "hdgst": ${hdgst:-false}, 00:28:18.326 "ddgst": ${ddgst:-false} 00:28:18.326 }, 00:28:18.326 "method": "bdev_nvme_attach_controller" 00:28:18.326 } 00:28:18.326 EOF 00:28:18.326 )") 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.326 { 00:28:18.326 "params": { 00:28:18.326 "name": "Nvme$subsystem", 00:28:18.326 "trtype": "$TEST_TRANSPORT", 00:28:18.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.326 "adrfam": "ipv4", 00:28:18.326 "trsvcid": "$NVMF_PORT", 00:28:18.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.326 "hdgst": ${hdgst:-false}, 00:28:18.326 "ddgst": ${ddgst:-false} 00:28:18.326 }, 00:28:18.326 "method": "bdev_nvme_attach_controller" 00:28:18.326 } 00:28:18.326 EOF 00:28:18.326 )") 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.326 { 00:28:18.326 "params": { 00:28:18.326 "name": "Nvme$subsystem", 00:28:18.326 "trtype": "$TEST_TRANSPORT", 00:28:18.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.326 "adrfam": "ipv4", 00:28:18.326 "trsvcid": "$NVMF_PORT", 00:28:18.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.326 "hdgst": ${hdgst:-false}, 00:28:18.326 "ddgst": ${ddgst:-false} 00:28:18.326 }, 00:28:18.326 "method": "bdev_nvme_attach_controller" 00:28:18.326 } 00:28:18.326 EOF 00:28:18.326 )") 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:18.326 18:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:18.326 "params": { 00:28:18.326 "name": "Nvme1", 00:28:18.326 "trtype": "tcp", 00:28:18.326 "traddr": "10.0.0.2", 00:28:18.326 "adrfam": "ipv4", 00:28:18.326 "trsvcid": "4420", 00:28:18.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:18.326 "hdgst": false, 00:28:18.326 "ddgst": false 00:28:18.326 }, 00:28:18.326 "method": "bdev_nvme_attach_controller" 00:28:18.326 },{ 00:28:18.326 "params": { 00:28:18.326 "name": "Nvme2", 00:28:18.327 "trtype": "tcp", 00:28:18.327 "traddr": "10.0.0.2", 00:28:18.327 "adrfam": "ipv4", 00:28:18.327 "trsvcid": "4420", 00:28:18.327 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:18.327 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:18.327 "hdgst": false, 00:28:18.327 "ddgst": false 00:28:18.327 }, 00:28:18.327 "method": "bdev_nvme_attach_controller" 00:28:18.327 },{ 00:28:18.327 "params": { 00:28:18.327 "name": "Nvme3", 00:28:18.327 "trtype": "tcp", 00:28:18.327 "traddr": "10.0.0.2", 00:28:18.327 "adrfam": "ipv4", 00:28:18.327 "trsvcid": "4420", 00:28:18.327 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:18.327 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:18.327 "hdgst": false, 00:28:18.327 "ddgst": false 00:28:18.327 }, 00:28:18.327 "method": "bdev_nvme_attach_controller" 00:28:18.327 },{ 00:28:18.327 "params": { 00:28:18.327 "name": "Nvme4", 00:28:18.327 "trtype": "tcp", 00:28:18.327 "traddr": "10.0.0.2", 00:28:18.327 "adrfam": "ipv4", 00:28:18.327 "trsvcid": "4420", 00:28:18.327 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:18.327 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:18.327 "hdgst": false, 00:28:18.327 "ddgst": false 00:28:18.327 }, 00:28:18.327 "method": "bdev_nvme_attach_controller" 00:28:18.327 },{ 
00:28:18.327 "params": { 00:28:18.327 "name": "Nvme5", 00:28:18.327 "trtype": "tcp", 00:28:18.327 "traddr": "10.0.0.2", 00:28:18.327 "adrfam": "ipv4", 00:28:18.327 "trsvcid": "4420", 00:28:18.327 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:18.327 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:18.327 "hdgst": false, 00:28:18.327 "ddgst": false 00:28:18.327 }, 00:28:18.327 "method": "bdev_nvme_attach_controller" 00:28:18.327 },{ 00:28:18.327 "params": { 00:28:18.327 "name": "Nvme6", 00:28:18.327 "trtype": "tcp", 00:28:18.327 "traddr": "10.0.0.2", 00:28:18.327 "adrfam": "ipv4", 00:28:18.327 "trsvcid": "4420", 00:28:18.327 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:18.327 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:18.327 "hdgst": false, 00:28:18.327 "ddgst": false 00:28:18.327 }, 00:28:18.327 "method": "bdev_nvme_attach_controller" 00:28:18.327 },{ 00:28:18.327 "params": { 00:28:18.327 "name": "Nvme7", 00:28:18.327 "trtype": "tcp", 00:28:18.327 "traddr": "10.0.0.2", 00:28:18.327 "adrfam": "ipv4", 00:28:18.327 "trsvcid": "4420", 00:28:18.327 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:18.327 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:18.327 "hdgst": false, 00:28:18.327 "ddgst": false 00:28:18.327 }, 00:28:18.327 "method": "bdev_nvme_attach_controller" 00:28:18.327 },{ 00:28:18.327 "params": { 00:28:18.327 "name": "Nvme8", 00:28:18.327 "trtype": "tcp", 00:28:18.327 "traddr": "10.0.0.2", 00:28:18.327 "adrfam": "ipv4", 00:28:18.327 "trsvcid": "4420", 00:28:18.327 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:18.327 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:18.327 "hdgst": false, 00:28:18.327 "ddgst": false 00:28:18.327 }, 00:28:18.327 "method": "bdev_nvme_attach_controller" 00:28:18.327 },{ 00:28:18.327 "params": { 00:28:18.327 "name": "Nvme9", 00:28:18.327 "trtype": "tcp", 00:28:18.327 "traddr": "10.0.0.2", 00:28:18.327 "adrfam": "ipv4", 00:28:18.327 "trsvcid": "4420", 00:28:18.327 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:18.327 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:18.327 "hdgst": false, 00:28:18.327 "ddgst": false 00:28:18.327 }, 00:28:18.327 "method": "bdev_nvme_attach_controller" 00:28:18.327 },{ 00:28:18.327 "params": { 00:28:18.327 "name": "Nvme10", 00:28:18.327 "trtype": "tcp", 00:28:18.327 "traddr": "10.0.0.2", 00:28:18.327 "adrfam": "ipv4", 00:28:18.327 "trsvcid": "4420", 00:28:18.327 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:18.327 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:18.327 "hdgst": false, 00:28:18.327 "ddgst": false 00:28:18.327 }, 00:28:18.327 "method": "bdev_nvme_attach_controller" 00:28:18.327 }' 00:28:18.327 [2024-11-17 18:49:04.704349] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:18.327 [2024-11-17 18:49:04.704439] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:18.327 [2024-11-17 18:49:04.778321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.327 [2024-11-17 18:49:04.825296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.226 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.226 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:20.226 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:20.226 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.226 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:20.226 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.226 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 814526 00:28:20.226 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:20.226 18:49:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:21.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 814526 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 814349 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:21.601 { 00:28:21.601 "params": { 00:28:21.601 "name": "Nvme$subsystem", 00:28:21.601 "trtype": "$TEST_TRANSPORT", 00:28:21.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.601 "adrfam": "ipv4", 00:28:21.601 "trsvcid": "$NVMF_PORT", 00:28:21.601 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.601 "hdgst": ${hdgst:-false}, 00:28:21.601 "ddgst": ${ddgst:-false} 00:28:21.601 }, 00:28:21.601 "method": "bdev_nvme_attach_controller" 00:28:21.601 } 00:28:21.601 EOF 00:28:21.601 )") 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:21.601 { 00:28:21.601 "params": { 00:28:21.601 "name": "Nvme$subsystem", 00:28:21.601 "trtype": "$TEST_TRANSPORT", 00:28:21.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.601 "adrfam": "ipv4", 00:28:21.601 "trsvcid": "$NVMF_PORT", 00:28:21.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.601 "hdgst": ${hdgst:-false}, 00:28:21.601 "ddgst": ${ddgst:-false} 00:28:21.601 }, 00:28:21.601 "method": "bdev_nvme_attach_controller" 00:28:21.601 } 00:28:21.601 EOF 00:28:21.601 )") 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:21.601 { 00:28:21.601 "params": { 00:28:21.601 "name": "Nvme$subsystem", 00:28:21.601 "trtype": "$TEST_TRANSPORT", 00:28:21.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.601 "adrfam": "ipv4", 00:28:21.601 "trsvcid": "$NVMF_PORT", 00:28:21.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.601 "hdgst": 
${hdgst:-false}, 00:28:21.601 "ddgst": ${ddgst:-false} 00:28:21.601 }, 00:28:21.601 "method": "bdev_nvme_attach_controller" 00:28:21.601 } 00:28:21.601 EOF 00:28:21.601 )") 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:21.601 { 00:28:21.601 "params": { 00:28:21.601 "name": "Nvme$subsystem", 00:28:21.601 "trtype": "$TEST_TRANSPORT", 00:28:21.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.601 "adrfam": "ipv4", 00:28:21.601 "trsvcid": "$NVMF_PORT", 00:28:21.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.601 "hdgst": ${hdgst:-false}, 00:28:21.601 "ddgst": ${ddgst:-false} 00:28:21.601 }, 00:28:21.601 "method": "bdev_nvme_attach_controller" 00:28:21.601 } 00:28:21.601 EOF 00:28:21.601 )") 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:21.601 { 00:28:21.601 "params": { 00:28:21.601 "name": "Nvme$subsystem", 00:28:21.601 "trtype": "$TEST_TRANSPORT", 00:28:21.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.601 "adrfam": "ipv4", 00:28:21.601 "trsvcid": "$NVMF_PORT", 00:28:21.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.601 "hdgst": ${hdgst:-false}, 00:28:21.601 "ddgst": ${ddgst:-false} 00:28:21.601 }, 00:28:21.601 "method": "bdev_nvme_attach_controller" 
00:28:21.601 } 00:28:21.601 EOF 00:28:21.601 )") 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:21.601 { 00:28:21.601 "params": { 00:28:21.601 "name": "Nvme$subsystem", 00:28:21.601 "trtype": "$TEST_TRANSPORT", 00:28:21.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.601 "adrfam": "ipv4", 00:28:21.601 "trsvcid": "$NVMF_PORT", 00:28:21.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.601 "hdgst": ${hdgst:-false}, 00:28:21.601 "ddgst": ${ddgst:-false} 00:28:21.601 }, 00:28:21.601 "method": "bdev_nvme_attach_controller" 00:28:21.601 } 00:28:21.601 EOF 00:28:21.601 )") 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:21.601 { 00:28:21.601 "params": { 00:28:21.601 "name": "Nvme$subsystem", 00:28:21.601 "trtype": "$TEST_TRANSPORT", 00:28:21.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.601 "adrfam": "ipv4", 00:28:21.601 "trsvcid": "$NVMF_PORT", 00:28:21.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.601 "hdgst": ${hdgst:-false}, 00:28:21.601 "ddgst": ${ddgst:-false} 00:28:21.601 }, 00:28:21.601 "method": "bdev_nvme_attach_controller" 00:28:21.601 } 00:28:21.601 EOF 00:28:21.601 )") 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:21.601 { 00:28:21.601 "params": { 00:28:21.601 "name": "Nvme$subsystem", 00:28:21.601 "trtype": "$TEST_TRANSPORT", 00:28:21.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.601 "adrfam": "ipv4", 00:28:21.601 "trsvcid": "$NVMF_PORT", 00:28:21.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.601 "hdgst": ${hdgst:-false}, 00:28:21.601 "ddgst": ${ddgst:-false} 00:28:21.601 }, 00:28:21.601 "method": "bdev_nvme_attach_controller" 00:28:21.601 } 00:28:21.601 EOF 00:28:21.601 )") 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:21.601 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:21.601 { 00:28:21.601 "params": { 00:28:21.601 "name": "Nvme$subsystem", 00:28:21.601 "trtype": "$TEST_TRANSPORT", 00:28:21.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.601 "adrfam": "ipv4", 00:28:21.601 "trsvcid": "$NVMF_PORT", 00:28:21.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.601 "hdgst": ${hdgst:-false}, 00:28:21.602 "ddgst": ${ddgst:-false} 00:28:21.602 }, 00:28:21.602 "method": "bdev_nvme_attach_controller" 00:28:21.602 } 00:28:21.602 EOF 00:28:21.602 )") 00:28:21.602 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:21.602 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:21.602 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:21.602 { 00:28:21.602 "params": { 00:28:21.602 "name": "Nvme$subsystem", 00:28:21.602 "trtype": "$TEST_TRANSPORT", 00:28:21.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.602 "adrfam": "ipv4", 00:28:21.602 "trsvcid": "$NVMF_PORT", 00:28:21.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.602 "hdgst": ${hdgst:-false}, 00:28:21.602 "ddgst": ${ddgst:-false} 00:28:21.602 }, 00:28:21.602 "method": "bdev_nvme_attach_controller" 00:28:21.602 } 00:28:21.602 EOF 00:28:21.602 )") 00:28:21.602 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:21.602 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:28:21.602 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:21.602 18:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:21.602 "params": { 00:28:21.602 "name": "Nvme1", 00:28:21.602 "trtype": "tcp", 00:28:21.602 "traddr": "10.0.0.2", 00:28:21.602 "adrfam": "ipv4", 00:28:21.602 "trsvcid": "4420", 00:28:21.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:21.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:21.602 "hdgst": false, 00:28:21.602 "ddgst": false 00:28:21.602 }, 00:28:21.602 "method": "bdev_nvme_attach_controller" 00:28:21.602 },{ 00:28:21.602 "params": { 00:28:21.602 "name": "Nvme2", 00:28:21.602 "trtype": "tcp", 00:28:21.602 "traddr": "10.0.0.2", 00:28:21.602 "adrfam": "ipv4", 00:28:21.602 "trsvcid": "4420", 00:28:21.602 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:21.602 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:21.602 "hdgst": false, 00:28:21.602 "ddgst": false 00:28:21.602 }, 
00:28:21.602 "method": "bdev_nvme_attach_controller" 00:28:21.602 },{ 00:28:21.602 "params": { 00:28:21.602 "name": "Nvme3", 00:28:21.602 "trtype": "tcp", 00:28:21.602 "traddr": "10.0.0.2", 00:28:21.602 "adrfam": "ipv4", 00:28:21.602 "trsvcid": "4420", 00:28:21.602 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:21.602 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:21.602 "hdgst": false, 00:28:21.602 "ddgst": false 00:28:21.602 }, 00:28:21.602 "method": "bdev_nvme_attach_controller" 00:28:21.602 },{ 00:28:21.602 "params": { 00:28:21.602 "name": "Nvme4", 00:28:21.602 "trtype": "tcp", 00:28:21.602 "traddr": "10.0.0.2", 00:28:21.602 "adrfam": "ipv4", 00:28:21.602 "trsvcid": "4420", 00:28:21.602 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:21.602 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:21.602 "hdgst": false, 00:28:21.602 "ddgst": false 00:28:21.602 }, 00:28:21.602 "method": "bdev_nvme_attach_controller" 00:28:21.602 },{ 00:28:21.602 "params": { 00:28:21.602 "name": "Nvme5", 00:28:21.602 "trtype": "tcp", 00:28:21.602 "traddr": "10.0.0.2", 00:28:21.602 "adrfam": "ipv4", 00:28:21.602 "trsvcid": "4420", 00:28:21.602 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:21.602 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:21.602 "hdgst": false, 00:28:21.602 "ddgst": false 00:28:21.602 }, 00:28:21.602 "method": "bdev_nvme_attach_controller" 00:28:21.602 },{ 00:28:21.602 "params": { 00:28:21.602 "name": "Nvme6", 00:28:21.602 "trtype": "tcp", 00:28:21.602 "traddr": "10.0.0.2", 00:28:21.602 "adrfam": "ipv4", 00:28:21.602 "trsvcid": "4420", 00:28:21.602 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:21.602 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:21.602 "hdgst": false, 00:28:21.602 "ddgst": false 00:28:21.602 }, 00:28:21.602 "method": "bdev_nvme_attach_controller" 00:28:21.602 },{ 00:28:21.602 "params": { 00:28:21.602 "name": "Nvme7", 00:28:21.602 "trtype": "tcp", 00:28:21.602 "traddr": "10.0.0.2", 00:28:21.602 "adrfam": "ipv4", 00:28:21.602 "trsvcid": "4420", 00:28:21.602 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:21.602 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:21.602 "hdgst": false, 00:28:21.602 "ddgst": false 00:28:21.602 }, 00:28:21.602 "method": "bdev_nvme_attach_controller" 00:28:21.602 },{ 00:28:21.602 "params": { 00:28:21.602 "name": "Nvme8", 00:28:21.602 "trtype": "tcp", 00:28:21.602 "traddr": "10.0.0.2", 00:28:21.602 "adrfam": "ipv4", 00:28:21.602 "trsvcid": "4420", 00:28:21.602 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:21.602 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:21.602 "hdgst": false, 00:28:21.602 "ddgst": false 00:28:21.602 }, 00:28:21.602 "method": "bdev_nvme_attach_controller" 00:28:21.602 },{ 00:28:21.602 "params": { 00:28:21.602 "name": "Nvme9", 00:28:21.602 "trtype": "tcp", 00:28:21.602 "traddr": "10.0.0.2", 00:28:21.602 "adrfam": "ipv4", 00:28:21.602 "trsvcid": "4420", 00:28:21.602 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:21.602 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:21.602 "hdgst": false, 00:28:21.602 "ddgst": false 00:28:21.602 }, 00:28:21.602 "method": "bdev_nvme_attach_controller" 00:28:21.602 },{ 00:28:21.602 "params": { 00:28:21.602 "name": "Nvme10", 00:28:21.602 "trtype": "tcp", 00:28:21.602 "traddr": "10.0.0.2", 00:28:21.602 "adrfam": "ipv4", 00:28:21.602 "trsvcid": "4420", 00:28:21.602 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:21.602 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:21.602 "hdgst": false, 00:28:21.602 "ddgst": false 00:28:21.602 }, 00:28:21.602 "method": "bdev_nvme_attach_controller" 00:28:21.602 }' 00:28:21.602 [2024-11-17 18:49:07.815246] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
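The `config+=("$(cat <<-EOF ...)")` lines repeated above come from the `gen_nvmf_target_json` helper in `nvmf/common.sh`: one JSON fragment is captured per subsystem into a bash array, then the fragments are comma-joined (via `IFS=,`) into the single JSON document fed to bdevperf's `--json` input. Below is a minimal standalone sketch of that pattern; the variable values are illustrative stand-ins, and the real helper additionally pipes the result through `jq .` to validate it before emitting.

```shell
#!/usr/bin/env bash
# Sketch of the config-accumulation pattern from gen_nvmf_target_json.
# Illustrative values; the actual script derives these from the test env.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
	# Capture one JSON fragment per subsystem; the heredoc expands
	# $subsystem and the transport variables at capture time.
	config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the fragments into one document, as common.sh does
# with IFS=, and printf '%s\n' "${config[*]}".
IFS=,
printf '%s\n' "${config[*]}"
```

The join step explains the `},{` seams visible in the `printf '%s\n'` output logged above: each array element is a complete object, and `IFS=,` supplies the separator between them.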
00:28:21.602 [2024-11-17 18:49:07.815337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814943 ] 00:28:21.602 [2024-11-17 18:49:07.889795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.602 [2024-11-17 18:49:07.938913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.975 Running I/O for 1 seconds... 00:28:24.166 1741.00 IOPS, 108.81 MiB/s 00:28:24.166 Latency(us) 00:28:24.166 [2024-11-17T17:49:10.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.166 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.166 Verification LBA range: start 0x0 length 0x400 00:28:24.166 Nvme1n1 : 1.12 227.70 14.23 0.00 0.00 278299.69 18835.53 257872.02 00:28:24.166 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.166 Verification LBA range: start 0x0 length 0x400 00:28:24.166 Nvme2n1 : 1.13 226.26 14.14 0.00 0.00 275022.13 21262.79 256318.58 00:28:24.166 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.166 Verification LBA range: start 0x0 length 0x400 00:28:24.166 Nvme3n1 : 1.11 234.85 14.68 0.00 0.00 254572.26 17864.63 250104.79 00:28:24.166 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.166 Verification LBA range: start 0x0 length 0x400 00:28:24.166 Nvme4n1 : 1.11 231.61 14.48 0.00 0.00 259799.80 18350.08 279620.27 00:28:24.166 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.166 Verification LBA range: start 0x0 length 0x400 00:28:24.166 Nvme5n1 : 1.14 233.10 14.57 0.00 0.00 253494.39 3276.80 245444.46 00:28:24.166 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.166 Verification LBA range: start 0x0 
length 0x400 00:28:24.166 Nvme6n1 : 1.15 226.61 14.16 0.00 0.00 256955.17 2597.17 274959.93 00:28:24.166 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.166 Verification LBA range: start 0x0 length 0x400 00:28:24.166 Nvme7n1 : 1.12 228.81 14.30 0.00 0.00 249454.74 28738.75 260978.92 00:28:24.166 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.166 Verification LBA range: start 0x0 length 0x400 00:28:24.166 Nvme8n1 : 1.14 224.52 14.03 0.00 0.00 250312.82 16990.81 265639.25 00:28:24.167 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.167 Verification LBA range: start 0x0 length 0x400 00:28:24.167 Nvme9n1 : 1.15 222.44 13.90 0.00 0.00 248498.44 21651.15 284280.60 00:28:24.167 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:24.167 Verification LBA range: start 0x0 length 0x400 00:28:24.167 Nvme10n1 : 1.19 268.97 16.81 0.00 0.00 202908.99 2281.62 270299.59 00:28:24.167 [2024-11-17T17:49:10.743Z] =================================================================================================================== 00:28:24.167 [2024-11-17T17:49:10.743Z] Total : 2324.87 145.30 0.00 0.00 251730.12 2281.62 284280.60 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@46 -- # nvmftestfini 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:24.425 rmmod nvme_tcp 00:28:24.425 rmmod nvme_fabrics 00:28:24.425 rmmod nvme_keyring 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 814349 ']' 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 814349 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 814349 ']' 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 814349 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 814349 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 814349' 00:28:24.425 killing process with pid 814349 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 814349 00:28:24.425 18:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 814349 00:28:24.994 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:24.994 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:24.994 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:24.994 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:24.994 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:24.994 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:24.994 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:24.994 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:24.994 18:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:24.994 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.994 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.994 18:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:26.898 00:28:26.898 real 0m11.887s 00:28:26.898 user 0m34.503s 00:28:26.898 sys 0m3.246s 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:26.898 ************************************ 00:28:26.898 END TEST nvmf_shutdown_tc1 00:28:26.898 ************************************ 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:26.898 ************************************ 00:28:26.898 START TEST nvmf_shutdown_tc2 00:28:26.898 ************************************ 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:26.898 18:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.898 18:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.898 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:26.899 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:26.899 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:26.899 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.899 18:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:26.899 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.899 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:27.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:28:27.158 00:28:27.158 --- 10.0.0.2 ping statistics --- 00:28:27.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.158 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:27.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:28:27.158 00:28:27.158 --- 10.0.0.1 ping statistics --- 00:28:27.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.158 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.158 
18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=815714 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 815714 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 815714 ']' 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.158 18:49:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.417 [2024-11-17 18:49:13.774198] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:28:27.417 [2024-11-17 18:49:13.774271] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.417 [2024-11-17 18:49:13.847136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:27.417 [2024-11-17 18:49:13.896411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.417 [2024-11-17 18:49:13.896486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.417 [2024-11-17 18:49:13.896500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.417 [2024-11-17 18:49:13.896510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.417 [2024-11-17 18:49:13.896520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:27.417 [2024-11-17 18:49:13.898192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.417 [2024-11-17 18:49:13.898257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:27.417 [2024-11-17 18:49:13.898322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:27.417 [2024-11-17 18:49:13.898325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.675 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:27.675 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:27.675 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:27.675 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:27.675 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.675 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.675 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:27.675 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.675 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.675 [2024-11-17 18:49:14.053849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.675 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.675 18:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:27.675 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:27.675 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:27.675 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.675 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.676 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.676 Malloc1 00:28:27.676 [2024-11-17 18:49:14.157349] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.676 Malloc2 00:28:27.676 Malloc3 00:28:27.934 Malloc4 00:28:27.934 Malloc5 00:28:27.934 Malloc6 00:28:27.934 Malloc7 00:28:27.934 Malloc8 00:28:28.201 Malloc9 
00:28:28.201 Malloc10 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=815893 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 815893 /var/tmp/bdevperf.sock 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 815893 ']' 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:28.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.201 { 00:28:28.201 "params": { 00:28:28.201 "name": "Nvme$subsystem", 00:28:28.201 "trtype": "$TEST_TRANSPORT", 00:28:28.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.201 "adrfam": "ipv4", 00:28:28.201 "trsvcid": "$NVMF_PORT", 00:28:28.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.201 "hdgst": ${hdgst:-false}, 00:28:28.201 "ddgst": ${ddgst:-false} 00:28:28.201 }, 00:28:28.201 "method": "bdev_nvme_attach_controller" 00:28:28.201 } 00:28:28.201 EOF 00:28:28.201 )") 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.201 { 00:28:28.201 "params": { 00:28:28.201 "name": "Nvme$subsystem", 00:28:28.201 "trtype": "$TEST_TRANSPORT", 00:28:28.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.201 
"adrfam": "ipv4", 00:28:28.201 "trsvcid": "$NVMF_PORT", 00:28:28.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.201 "hdgst": ${hdgst:-false}, 00:28:28.201 "ddgst": ${ddgst:-false} 00:28:28.201 }, 00:28:28.201 "method": "bdev_nvme_attach_controller" 00:28:28.201 } 00:28:28.201 EOF 00:28:28.201 )") 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:28.201 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.202 { 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme$subsystem", 00:28:28.202 "trtype": "$TEST_TRANSPORT", 00:28:28.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "$NVMF_PORT", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.202 "hdgst": ${hdgst:-false}, 00:28:28.202 "ddgst": ${ddgst:-false} 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 } 00:28:28.202 EOF 00:28:28.202 )") 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.202 { 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme$subsystem", 00:28:28.202 "trtype": "$TEST_TRANSPORT", 00:28:28.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "$NVMF_PORT", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.202 "hdgst": ${hdgst:-false}, 00:28:28.202 "ddgst": ${ddgst:-false} 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 } 00:28:28.202 EOF 00:28:28.202 )") 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.202 { 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme$subsystem", 00:28:28.202 "trtype": "$TEST_TRANSPORT", 00:28:28.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "$NVMF_PORT", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.202 "hdgst": ${hdgst:-false}, 00:28:28.202 "ddgst": ${ddgst:-false} 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 } 00:28:28.202 EOF 00:28:28.202 )") 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.202 { 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme$subsystem", 00:28:28.202 "trtype": "$TEST_TRANSPORT", 00:28:28.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "$NVMF_PORT", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.202 "hdgst": ${hdgst:-false}, 00:28:28.202 "ddgst": 
${ddgst:-false} 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 } 00:28:28.202 EOF 00:28:28.202 )") 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.202 { 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme$subsystem", 00:28:28.202 "trtype": "$TEST_TRANSPORT", 00:28:28.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "$NVMF_PORT", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.202 "hdgst": ${hdgst:-false}, 00:28:28.202 "ddgst": ${ddgst:-false} 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 } 00:28:28.202 EOF 00:28:28.202 )") 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.202 { 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme$subsystem", 00:28:28.202 "trtype": "$TEST_TRANSPORT", 00:28:28.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "$NVMF_PORT", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.202 "hdgst": ${hdgst:-false}, 00:28:28.202 "ddgst": ${ddgst:-false} 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 } 00:28:28.202 EOF 00:28:28.202 
)") 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.202 { 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme$subsystem", 00:28:28.202 "trtype": "$TEST_TRANSPORT", 00:28:28.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "$NVMF_PORT", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.202 "hdgst": ${hdgst:-false}, 00:28:28.202 "ddgst": ${ddgst:-false} 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 } 00:28:28.202 EOF 00:28:28.202 )") 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.202 { 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme$subsystem", 00:28:28.202 "trtype": "$TEST_TRANSPORT", 00:28:28.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "$NVMF_PORT", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.202 "hdgst": ${hdgst:-false}, 00:28:28.202 "ddgst": ${ddgst:-false} 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 } 00:28:28.202 EOF 00:28:28.202 )") 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:28.202 
18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:28.202 18:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme1", 00:28:28.202 "trtype": "tcp", 00:28:28.202 "traddr": "10.0.0.2", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "4420", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:28.202 "hdgst": false, 00:28:28.202 "ddgst": false 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 },{ 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme2", 00:28:28.202 "trtype": "tcp", 00:28:28.202 "traddr": "10.0.0.2", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "4420", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:28.202 "hdgst": false, 00:28:28.202 "ddgst": false 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 },{ 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme3", 00:28:28.202 "trtype": "tcp", 00:28:28.202 "traddr": "10.0.0.2", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "4420", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:28.202 "hdgst": false, 00:28:28.202 "ddgst": false 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 },{ 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme4", 00:28:28.202 "trtype": "tcp", 00:28:28.202 "traddr": "10.0.0.2", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "4420", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:28.202 "hdgst": false, 00:28:28.202 "ddgst": false 00:28:28.202 }, 
00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 },{ 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme5", 00:28:28.202 "trtype": "tcp", 00:28:28.202 "traddr": "10.0.0.2", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "4420", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:28.202 "hdgst": false, 00:28:28.202 "ddgst": false 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 },{ 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme6", 00:28:28.202 "trtype": "tcp", 00:28:28.202 "traddr": "10.0.0.2", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "4420", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:28.202 "hdgst": false, 00:28:28.202 "ddgst": false 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 },{ 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme7", 00:28:28.202 "trtype": "tcp", 00:28:28.202 "traddr": "10.0.0.2", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "4420", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:28.202 "hdgst": false, 00:28:28.202 "ddgst": false 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 },{ 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme8", 00:28:28.202 "trtype": "tcp", 00:28:28.202 "traddr": "10.0.0.2", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "4420", 00:28:28.202 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:28.202 "hdgst": false, 00:28:28.202 "ddgst": false 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 },{ 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme9", 00:28:28.202 "trtype": "tcp", 00:28:28.202 "traddr": "10.0.0.2", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "4420", 00:28:28.202 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:28.202 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:28.202 "hdgst": false, 00:28:28.202 "ddgst": false 00:28:28.202 }, 00:28:28.202 "method": "bdev_nvme_attach_controller" 00:28:28.202 },{ 00:28:28.202 "params": { 00:28:28.202 "name": "Nvme10", 00:28:28.202 "trtype": "tcp", 00:28:28.202 "traddr": "10.0.0.2", 00:28:28.202 "adrfam": "ipv4", 00:28:28.202 "trsvcid": "4420", 00:28:28.203 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:28.203 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:28.203 "hdgst": false, 00:28:28.203 "ddgst": false 00:28:28.203 }, 00:28:28.203 "method": "bdev_nvme_attach_controller" 00:28:28.203 }' 00:28:28.203 [2024-11-17 18:49:14.663054] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:28.203 [2024-11-17 18:49:14.663134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid815893 ] 00:28:28.203 [2024-11-17 18:49:14.738000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.466 [2024-11-17 18:49:14.786141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.365 Running I/O for 10 seconds... 
00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:30.365 18:49:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:30.624 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:30.624 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:30.624 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:30.624 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:30.624 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.624 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.624 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.624 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:30.624 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:30.624 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 815893 00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 815893 ']' 
00:28:30.882 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 815893 00:28:30.883 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:30.883 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.883 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 815893 00:28:30.883 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:30.883 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:30.883 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 815893' 00:28:30.883 killing process with pid 815893 00:28:30.883 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 815893 00:28:30.883 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 815893 00:28:31.141 Received shutdown signal, test time was about 0.963782 seconds 00:28:31.141 00:28:31.141 Latency(us) 00:28:31.141 [2024-11-17T17:49:17.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.141 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.141 Verification LBA range: start 0x0 length 0x400 00:28:31.141 Nvme1n1 : 0.89 215.25 13.45 0.00 0.00 293752.98 21942.42 236123.78 00:28:31.141 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.141 Verification LBA range: start 0x0 length 0x400 00:28:31.141 Nvme2n1 : 0.93 275.36 17.21 0.00 0.00 225074.82 18835.53 256318.58 00:28:31.141 Job: 
Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.141 Verification LBA range: start 0x0 length 0x400 00:28:31.141 Nvme3n1 : 0.96 265.85 16.62 0.00 0.00 219526.26 25049.32 245444.46 00:28:31.141 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.141 Verification LBA range: start 0x0 length 0x400 00:28:31.141 Nvme4n1 : 0.93 276.50 17.28 0.00 0.00 214379.14 17476.27 256318.58 00:28:31.141 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.141 Verification LBA range: start 0x0 length 0x400 00:28:31.141 Nvme5n1 : 0.88 218.15 13.63 0.00 0.00 265372.51 18155.90 254765.13 00:28:31.141 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.141 Verification LBA range: start 0x0 length 0x400 00:28:31.141 Nvme6n1 : 0.90 214.31 13.39 0.00 0.00 264721.19 21068.61 237677.23 00:28:31.141 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.141 Verification LBA range: start 0x0 length 0x400 00:28:31.141 Nvme7n1 : 0.91 211.22 13.20 0.00 0.00 263044.87 22136.60 256318.58 00:28:31.141 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.141 Verification LBA range: start 0x0 length 0x400 00:28:31.141 Nvme8n1 : 0.90 212.81 13.30 0.00 0.00 255084.97 20680.25 259425.47 00:28:31.141 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.141 Verification LBA range: start 0x0 length 0x400 00:28:31.141 Nvme9n1 : 0.92 209.77 13.11 0.00 0.00 253668.06 20680.25 259425.47 00:28:31.141 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:31.141 Verification LBA range: start 0x0 length 0x400 00:28:31.141 Nvme10n1 : 0.92 209.04 13.06 0.00 0.00 249017.58 20194.80 284280.60 00:28:31.141 [2024-11-17T17:49:17.717Z] =================================================================================================================== 00:28:31.141 [2024-11-17T17:49:17.717Z] 
Total : 2308.26 144.27 0.00 0.00 247572.95 17476.27 284280.60 00:28:31.141 18:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 815714 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:32.515 rmmod nvme_tcp 00:28:32.515 rmmod nvme_fabrics 00:28:32.515 rmmod nvme_keyring 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 815714 ']' 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 815714 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 815714 ']' 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 815714 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 815714 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 815714' 00:28:32.515 killing process with pid 815714 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 815714 00:28:32.515 18:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 815714 00:28:32.849 
18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:32.849 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:32.849 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:32.849 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:32.849 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:32.849 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:32.849 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:32.849 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:32.849 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:32.849 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.849 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.849 18:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.778 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:34.778 00:28:34.778 real 0m7.919s 00:28:34.778 user 0m24.109s 00:28:34.778 sys 0m1.525s 00:28:34.778 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:34.778 18:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.778 ************************************ 00:28:34.778 END TEST nvmf_shutdown_tc2 00:28:34.778 ************************************ 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:35.038 ************************************ 00:28:35.038 START TEST nvmf_shutdown_tc3 00:28:35.038 ************************************ 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 
00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:35.038 18:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:35.038 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:35.039 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:35.039 18:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:35.039 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:35.039 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:35.039 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:35.039 18:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:35.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:35.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:28:35.039 00:28:35.039 --- 10.0.0.2 ping statistics --- 00:28:35.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.039 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:35.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:35.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:28:35.039 00:28:35.039 --- 10.0.0.1 ping statistics --- 00:28:35.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.039 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:35.039 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:35.298 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:35.298 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:35.298 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:35.298 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.298 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=816810 00:28:35.298 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:35.298 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 816810 00:28:35.298 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 816810 ']' 00:28:35.298 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.298 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.298 18:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.298 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.298 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.298 [2024-11-17 18:49:21.684360] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:35.298 [2024-11-17 18:49:21.684443] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.298 [2024-11-17 18:49:21.759167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:35.298 [2024-11-17 18:49:21.802907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.298 [2024-11-17 18:49:21.802960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.298 [2024-11-17 18:49:21.802987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:35.298 [2024-11-17 18:49:21.802998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:35.298 [2024-11-17 18:49:21.803007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:35.298 [2024-11-17 18:49:21.804426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.298 [2024-11-17 18:49:21.804532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.298 [2024-11-17 18:49:21.804617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:35.298 [2024-11-17 18:49:21.804620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.557 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.557 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:35.557 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:35.557 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:35.557 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.557 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.557 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:35.557 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.557 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.557 [2024-11-17 18:49:21.947868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.558 18:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.558 18:49:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.558 Malloc1 00:28:35.558 [2024-11-17 18:49:22.036563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.558 Malloc2 00:28:35.558 Malloc3 00:28:35.815 Malloc4 00:28:35.815 Malloc5 00:28:35.815 Malloc6 00:28:35.815 Malloc7 00:28:35.815 Malloc8 00:28:36.074 Malloc9 
00:28:36.074 Malloc10 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=816989 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 816989 /var/tmp/bdevperf.sock 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 816989 ']' 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:36.074 18:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:36.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.074 { 00:28:36.074 "params": { 00:28:36.074 "name": "Nvme$subsystem", 00:28:36.074 "trtype": "$TEST_TRANSPORT", 00:28:36.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.074 "adrfam": "ipv4", 00:28:36.074 "trsvcid": "$NVMF_PORT", 00:28:36.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.074 "hdgst": ${hdgst:-false}, 00:28:36.074 "ddgst": ${ddgst:-false} 00:28:36.074 }, 00:28:36.074 "method": "bdev_nvme_attach_controller" 00:28:36.074 } 00:28:36.074 EOF 00:28:36.074 )") 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.074 { 00:28:36.074 "params": { 00:28:36.074 "name": "Nvme$subsystem", 00:28:36.074 "trtype": "$TEST_TRANSPORT", 00:28:36.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.074 "adrfam": "ipv4", 00:28:36.074 "trsvcid": "$NVMF_PORT", 
00:28:36.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.074 "hdgst": ${hdgst:-false}, 00:28:36.074 "ddgst": ${ddgst:-false} 00:28:36.074 }, 00:28:36.074 "method": "bdev_nvme_attach_controller" 00:28:36.074 } 00:28:36.074 EOF 00:28:36.074 )") 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.074 { 00:28:36.074 "params": { 00:28:36.074 "name": "Nvme$subsystem", 00:28:36.074 "trtype": "$TEST_TRANSPORT", 00:28:36.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.074 "adrfam": "ipv4", 00:28:36.074 "trsvcid": "$NVMF_PORT", 00:28:36.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.074 "hdgst": ${hdgst:-false}, 00:28:36.074 "ddgst": ${ddgst:-false} 00:28:36.074 }, 00:28:36.074 "method": "bdev_nvme_attach_controller" 00:28:36.074 } 00:28:36.074 EOF 00:28:36.074 )") 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.074 { 00:28:36.074 "params": { 00:28:36.074 "name": "Nvme$subsystem", 00:28:36.074 "trtype": "$TEST_TRANSPORT", 00:28:36.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.074 "adrfam": "ipv4", 00:28:36.074 "trsvcid": "$NVMF_PORT", 00:28:36.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:28:36.074 "hdgst": ${hdgst:-false}, 00:28:36.074 "ddgst": ${ddgst:-false} 00:28:36.074 }, 00:28:36.074 "method": "bdev_nvme_attach_controller" 00:28:36.074 } 00:28:36.074 EOF 00:28:36.074 )") 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.074 { 00:28:36.074 "params": { 00:28:36.074 "name": "Nvme$subsystem", 00:28:36.074 "trtype": "$TEST_TRANSPORT", 00:28:36.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.074 "adrfam": "ipv4", 00:28:36.074 "trsvcid": "$NVMF_PORT", 00:28:36.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.074 "hdgst": ${hdgst:-false}, 00:28:36.074 "ddgst": ${ddgst:-false} 00:28:36.074 }, 00:28:36.074 "method": "bdev_nvme_attach_controller" 00:28:36.074 } 00:28:36.074 EOF 00:28:36.074 )") 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.074 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.074 { 00:28:36.074 "params": { 00:28:36.074 "name": "Nvme$subsystem", 00:28:36.074 "trtype": "$TEST_TRANSPORT", 00:28:36.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.074 "adrfam": "ipv4", 00:28:36.074 "trsvcid": "$NVMF_PORT", 00:28:36.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.074 "hdgst": ${hdgst:-false}, 00:28:36.074 "ddgst": ${ddgst:-false} 00:28:36.074 }, 00:28:36.074 "method": 
"bdev_nvme_attach_controller" 00:28:36.074 } 00:28:36.074 EOF 00:28:36.074 )") 00:28:36.075 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:36.075 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.075 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.075 { 00:28:36.075 "params": { 00:28:36.075 "name": "Nvme$subsystem", 00:28:36.075 "trtype": "$TEST_TRANSPORT", 00:28:36.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.075 "adrfam": "ipv4", 00:28:36.075 "trsvcid": "$NVMF_PORT", 00:28:36.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.075 "hdgst": ${hdgst:-false}, 00:28:36.075 "ddgst": ${ddgst:-false} 00:28:36.075 }, 00:28:36.075 "method": "bdev_nvme_attach_controller" 00:28:36.075 } 00:28:36.075 EOF 00:28:36.075 )") 00:28:36.075 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:36.075 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.075 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.075 { 00:28:36.075 "params": { 00:28:36.075 "name": "Nvme$subsystem", 00:28:36.075 "trtype": "$TEST_TRANSPORT", 00:28:36.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.075 "adrfam": "ipv4", 00:28:36.075 "trsvcid": "$NVMF_PORT", 00:28:36.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.075 "hdgst": ${hdgst:-false}, 00:28:36.075 "ddgst": ${ddgst:-false} 00:28:36.075 }, 00:28:36.075 "method": "bdev_nvme_attach_controller" 00:28:36.075 } 00:28:36.075 EOF 00:28:36.075 )") 00:28:36.075 18:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:36.075 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.075 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.075 { 00:28:36.075 "params": { 00:28:36.075 "name": "Nvme$subsystem", 00:28:36.075 "trtype": "$TEST_TRANSPORT", 00:28:36.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.075 "adrfam": "ipv4", 00:28:36.075 "trsvcid": "$NVMF_PORT", 00:28:36.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.075 "hdgst": ${hdgst:-false}, 00:28:36.075 "ddgst": ${ddgst:-false} 00:28:36.075 }, 00:28:36.075 "method": "bdev_nvme_attach_controller" 00:28:36.075 } 00:28:36.075 EOF 00:28:36.075 )") 00:28:36.075 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:36.075 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:36.075 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:36.075 { 00:28:36.075 "params": { 00:28:36.075 "name": "Nvme$subsystem", 00:28:36.075 "trtype": "$TEST_TRANSPORT", 00:28:36.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.075 "adrfam": "ipv4", 00:28:36.075 "trsvcid": "$NVMF_PORT", 00:28:36.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.075 "hdgst": ${hdgst:-false}, 00:28:36.075 "ddgst": ${ddgst:-false} 00:28:36.075 }, 00:28:36.075 "method": "bdev_nvme_attach_controller" 00:28:36.075 } 00:28:36.075 EOF 00:28:36.075 )") 00:28:36.075 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:36.075 18:49:22 
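The loop traced above (nvmf/common.sh) builds one bdev_nvme_attach_controller JSON fragment per subsystem with a here-doc, accumulates the fragments in a bash array, and later comma-joins them for jq. A minimal, self-contained sketch of that pattern — `gen_target_json` is a hypothetical name, and the fallback defaults (tcp, 10.0.0.2, 4420) are assumptions taken from the expanded config printed later in the trace:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch of the config-generation pattern in the trace: one JSON fragment
# per subsystem via a here-doc, collected in an array, comma-joined at the
# end (the trace's IFS=, / printf '%s\n' step) for jq to consume.
gen_target_json() {
    local subsystem
    local -a config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,              # join the array elements with commas
    printf '%s\n' "${config[*]}"
}

gen_target_json 1 2
```

Piping the joined fragments through `jq .`, as the traced script does, both validates and pretty-prints them; the output at 00:28:36.075 below is exactly that expansion for cnode1–cnode10.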
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:28:36.075 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:36.075 18:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:36.075 "params": { 00:28:36.075 "name": "Nvme1", 00:28:36.075 "trtype": "tcp", 00:28:36.075 "traddr": "10.0.0.2", 00:28:36.075 "adrfam": "ipv4", 00:28:36.075 "trsvcid": "4420", 00:28:36.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.075 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:36.075 "hdgst": false, 00:28:36.075 "ddgst": false 00:28:36.075 }, 00:28:36.075 "method": "bdev_nvme_attach_controller" 00:28:36.075 },{ 00:28:36.075 "params": { 00:28:36.075 "name": "Nvme2", 00:28:36.075 "trtype": "tcp", 00:28:36.075 "traddr": "10.0.0.2", 00:28:36.075 "adrfam": "ipv4", 00:28:36.075 "trsvcid": "4420", 00:28:36.075 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:36.075 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:36.075 "hdgst": false, 00:28:36.075 "ddgst": false 00:28:36.075 }, 00:28:36.075 "method": "bdev_nvme_attach_controller" 00:28:36.075 },{ 00:28:36.075 "params": { 00:28:36.075 "name": "Nvme3", 00:28:36.075 "trtype": "tcp", 00:28:36.075 "traddr": "10.0.0.2", 00:28:36.075 "adrfam": "ipv4", 00:28:36.075 "trsvcid": "4420", 00:28:36.075 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:36.075 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:36.075 "hdgst": false, 00:28:36.075 "ddgst": false 00:28:36.075 }, 00:28:36.075 "method": "bdev_nvme_attach_controller" 00:28:36.075 },{ 00:28:36.075 "params": { 00:28:36.075 "name": "Nvme4", 00:28:36.075 "trtype": "tcp", 00:28:36.075 "traddr": "10.0.0.2", 00:28:36.075 "adrfam": "ipv4", 00:28:36.075 "trsvcid": "4420", 00:28:36.075 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:36.075 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:36.075 "hdgst": false, 00:28:36.075 "ddgst": false 00:28:36.075 }, 
00:28:36.075 "method": "bdev_nvme_attach_controller" 00:28:36.075 },{ 00:28:36.075 "params": { 00:28:36.075 "name": "Nvme5", 00:28:36.075 "trtype": "tcp", 00:28:36.075 "traddr": "10.0.0.2", 00:28:36.075 "adrfam": "ipv4", 00:28:36.075 "trsvcid": "4420", 00:28:36.075 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:36.075 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:36.075 "hdgst": false, 00:28:36.075 "ddgst": false 00:28:36.075 }, 00:28:36.075 "method": "bdev_nvme_attach_controller" 00:28:36.075 },{ 00:28:36.075 "params": { 00:28:36.075 "name": "Nvme6", 00:28:36.075 "trtype": "tcp", 00:28:36.075 "traddr": "10.0.0.2", 00:28:36.075 "adrfam": "ipv4", 00:28:36.075 "trsvcid": "4420", 00:28:36.075 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:36.075 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:36.075 "hdgst": false, 00:28:36.075 "ddgst": false 00:28:36.075 }, 00:28:36.075 "method": "bdev_nvme_attach_controller" 00:28:36.075 },{ 00:28:36.075 "params": { 00:28:36.075 "name": "Nvme7", 00:28:36.075 "trtype": "tcp", 00:28:36.075 "traddr": "10.0.0.2", 00:28:36.075 "adrfam": "ipv4", 00:28:36.075 "trsvcid": "4420", 00:28:36.075 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:36.075 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:36.075 "hdgst": false, 00:28:36.075 "ddgst": false 00:28:36.075 }, 00:28:36.075 "method": "bdev_nvme_attach_controller" 00:28:36.075 },{ 00:28:36.075 "params": { 00:28:36.075 "name": "Nvme8", 00:28:36.075 "trtype": "tcp", 00:28:36.076 "traddr": "10.0.0.2", 00:28:36.076 "adrfam": "ipv4", 00:28:36.076 "trsvcid": "4420", 00:28:36.076 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:36.076 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:36.076 "hdgst": false, 00:28:36.076 "ddgst": false 00:28:36.076 }, 00:28:36.076 "method": "bdev_nvme_attach_controller" 00:28:36.076 },{ 00:28:36.076 "params": { 00:28:36.076 "name": "Nvme9", 00:28:36.076 "trtype": "tcp", 00:28:36.076 "traddr": "10.0.0.2", 00:28:36.076 "adrfam": "ipv4", 00:28:36.076 "trsvcid": "4420", 00:28:36.076 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:36.076 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:36.076 "hdgst": false, 00:28:36.076 "ddgst": false 00:28:36.076 }, 00:28:36.076 "method": "bdev_nvme_attach_controller" 00:28:36.076 },{ 00:28:36.076 "params": { 00:28:36.076 "name": "Nvme10", 00:28:36.076 "trtype": "tcp", 00:28:36.076 "traddr": "10.0.0.2", 00:28:36.076 "adrfam": "ipv4", 00:28:36.076 "trsvcid": "4420", 00:28:36.076 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:36.076 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:36.076 "hdgst": false, 00:28:36.076 "ddgst": false 00:28:36.076 }, 00:28:36.076 "method": "bdev_nvme_attach_controller" 00:28:36.076 }' 00:28:36.076 [2024-11-17 18:49:22.535877] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:36.076 [2024-11-17 18:49:22.535960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid816989 ] 00:28:36.076 [2024-11-17 18:49:22.609468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.334 [2024-11-17 18:49:22.657025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.706 Running I/O for 10 seconds... 
00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:38.272 18:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:38.272 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 816810 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 816810 ']' 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 816810 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 816810 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 816810' 00:28:38.545 killing process with pid 816810 00:28:38.545 18:49:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 816810 00:28:38.545 18:49:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 816810 00:28:38.545 [2024-11-17 18:49:24.945161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4070 is same with the state(6) to be set 00:28:38.545 [2024-11-17 18:49:24.945243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4070 is same with the state(6) to be set 00:28:38.545 [2024-11-17 18:49:24.945259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4070 is same with the state(6) to be set 00:28:38.545 [2024-11-17 18:49:24.945273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4070 is same with the state(6) to be set 00:28:38.545 [2024-11-17 18:49:24.945286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4070 is same with the state(6) to be set 00:28:38.545 [2024-11-17 18:49:24.945317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4070 is same with the state(6) to be set 00:28:38.545 [2024-11-17 18:49:24.945331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4070 is same with the state(6) to be set 00:28:38.545 [2024-11-17 18:49:24.945344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4070 is same with the state(6) to be set 00:28:38.545 [2024-11-17 18:49:24.945356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4070 is same with the state(6) to be set 00:28:38.545 [2024-11-17 18:49:24.945369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4070 is same with the state(6) to be set 00:28:38.545 [2024-11-17 18:49:24.945381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4070 is 
same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 
00:28:38.546 [2024-11-17 18:49:24.947635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947800] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.947990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.948002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.948014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.948025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.948037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.948049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.546 [2024-11-17 18:49:24.948061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 
is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.948270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 
00:28:38.547 [2024-11-17 18:49:24.948282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.948293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.948305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.948317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.948329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.949862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.949890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.949908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.949908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.547 [2024-11-17 18:49:24.949921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.949938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.949948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.547 [2024-11-17 18:49:24.949951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.949966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.949968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.547 [2024-11-17 18:49:24.949981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.949983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.547 [2024-11-17 18:49:24.949994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.949998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.547 [2024-11-17 18:49:24.950007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.547 [2024-11-17 18:49:24.950026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.547 [2024-11-17 18:49:24.950041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.547 [2024-11-17 18:49:24.950056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260140 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.547 [2024-11-17 18:49:24.950141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.547 [2024-11-17 18:49:24.950154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.547 [2024-11-17 18:49:24.950181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.547 [2024-11-17 18:49:24.950194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.547 [2024-11-17 18:49:24.950206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.547 [2024-11-17 18:49:24.950219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.547 [2024-11-17 18:49:24.950232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.547 [2024-11-17 18:49:24.950262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df8620 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set
00:28:38.547 [2024-11-17 18:49:24.950381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to
be set 00:28:38.547 [2024-11-17 18:49:24.950394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.950406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.950418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.950430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.547 [2024-11-17 18:49:24.950442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 
18:49:24.950539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950705] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.950723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4540 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.952913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.952946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.952961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.952979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.952991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 
is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 
00:28:38.548 [2024-11-17 18:49:24.953414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953567] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.548 [2024-11-17 18:49:24.953761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.953774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4a10 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.955539] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:38.549 [2024-11-17 18:49:24.955624] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:38.549 [2024-11-17 18:49:24.956323] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:38.549 [2024-11-17 18:49:24.958923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.958957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.958980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.958993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set 00:28:38.549 [2024-11-17 18:49:24.959188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the 
state(6) to be set
00:28:38.549 [2024-11-17 18:49:24.959200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4f00 is same with the state(6) to be set
[... previous message repeated for tqpair=0x9d4f00 through 2024-11-17 18:49:24.959725 ...]
00:28:38.549 [2024-11-17 18:49:24.966499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d5d70 is same with the state(6) to be set
[... previous message repeated for tqpair=0x9d5d70 through 2024-11-17 18:49:24.967299 ...]
00:28:38.550 [2024-11-17 18:49:24.968438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d6260 is same with the state(6) to be set
[... previous message repeated for tqpair=0x9d6260 through 2024-11-17 18:49:24.969206 ...]
00:28:38.551 [2024-11-17 18:49:24.969946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46de0 is same with the state(6) to be set
[... previous message repeated for tqpair=0xb46de0 through 2024-11-17 18:49:24.970785 ...]
00:28:38.552 [2024-11-17 18:49:24.976266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.552 [2024-11-17 18:49:24.976309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.552 [2024-11-17 18:49:24.976327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.552 [2024-11-17 18:49:24.976340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.552 [2024-11-17 18:49:24.976354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.552 [2024-11-17 18:49:24.976367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.552 [2024-11-17 18:49:24.976381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.552 [2024-11-17 18:49:24.976395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.552 [2024-11-17 18:49:24.976407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2223aa0 is same with the state(6) to be set
00:28:38.552 [2024-11-17 18:49:24.976456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2260140 (9): Bad file descriptor
00:28:38.552 [2024-11-17 18:49:24.976519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.552 [2024-11-17 18:49:24.976540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.552 [2024-11-17 18:49:24.976555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.552 [2024-11-17 18:49:24.976568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.552 [2024-11-17 18:49:24.976582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.552 [2024-11-17 18:49:24.976595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.552 [2024-11-17 18:49:24.976608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.552 [2024-11-17 18:49:24.976621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.552 [2024-11-17 18:49:24.976633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260b70 is same with the state(6) to be set
00:28:38.552 [2024-11-17 18:49:24.976664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df8620 (9): Bad file descriptor
00:28:38.552 [2024-11-17 18:49:24.976725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.552 [2024-11-17 18:49:24.976745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.552 [2024-11-17 18:49:24.976761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0
cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.976774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.976794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.976808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.976822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.976835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.976848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221b790 is same with the state(6) to be set 00:28:38.552 [2024-11-17 18:49:24.976896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.976917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.976931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.976944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.976957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.976971] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.976985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.976997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.977009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d03610 is same with the state(6) to be set 00:28:38.552 [2024-11-17 18:49:24.977056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.977077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.977091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.977104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.977117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.977130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.977144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.977157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.977169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22288b0 is same with the state(6) to be set 00:28:38.552 [2024-11-17 18:49:24.977216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.977237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.977251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.977271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.977285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.977298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.977311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.977324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.977337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df81c0 is same with the state(6) to be set 00:28:38.552 [2024-11-17 18:49:24.977381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.977401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.977415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.552 [2024-11-17 18:49:24.977428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.552 [2024-11-17 18:49:24.977442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.553 [2024-11-17 18:49:24.977455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.977469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.553 [2024-11-17 18:49:24.977481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.977494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deddd0 is same with the state(6) to be set 00:28:38.553 [2024-11-17 18:49:24.977539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.553 [2024-11-17 18:49:24.977560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.977574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.553 [2024-11-17 18:49:24.977588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.977601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.553 [2024-11-17 18:49:24.977614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.977627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.553 [2024-11-17 18:49:24.977640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.977652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df48d0 is same with the state(6) to be set 00:28:38.553 [2024-11-17 18:49:24.978275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978402] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978561] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 
[2024-11-17 18:49:24.978916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.978975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.978990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.979005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.979019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.979034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.979048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.979063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.979081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.979097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.979111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.979126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.979140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.979156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.979170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.979185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.979199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.979214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.979228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.553 [2024-11-17 18:49:24.979244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.553 [2024-11-17 18:49:24.979259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 
18:49:24.979587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.554 [2024-11-17 18:49:24.979744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.554 [2024-11-17 18:49:24.979759] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.979773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.979789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.979804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.979819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.979837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.979853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.979868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.979883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.979897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.979913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.979927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.979943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.979957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.979973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.979986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.980002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.980016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.980032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.980046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.980061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.980075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.980091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.980105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.980120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.980133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.980148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.980162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.980179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.980192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.980212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.980226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.980269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.554 [2024-11-17 18:49:24.980410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.554 [2024-11-17 18:49:24.980433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.554 [2024-11-17 18:49:24.980454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.980971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.980987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.555 [2024-11-17 18:49:24.981642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.555 [2024-11-17 18:49:24.981658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.981679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.981698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.981712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.981728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.981743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.981758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.981772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.981787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.981801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.981817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.981831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.981846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.981860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.981876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.981890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.981906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.981925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.981941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.981955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.981970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.981984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.982000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.982013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.982029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.982042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.982057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.982071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.982086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.982100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.982115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.982129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.982144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.982158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.982173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.982186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.982201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.982215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.982230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.982243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.982258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.982272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.982292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.982306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.982322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.982336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.982351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.982365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.982379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fa890 is same with the state(6) to be set
00:28:38.556 [2024-11-17 18:49:24.982836] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:38.556 [2024-11-17 18:49:24.985319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:28:38.556 [2024-11-17 18:49:24.985354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:28:38.556 [2024-11-17 18:49:24.985382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22288b0 (9): Bad file descriptor
00:28:38.556 [2024-11-17 18:49:24.985405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221b790 (9): Bad file descriptor
00:28:38.556 [2024-11-17 18:49:24.986423] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:38.556 [2024-11-17 18:49:24.986549] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:38.556 [2024-11-17 18:49:24.986702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.556 [2024-11-17 18:49:24.986733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221b790 with addr=10.0.0.2, port=4420
00:28:38.556 [2024-11-17 18:49:24.986751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221b790 is same with the state(6) to be set
00:28:38.556 [2024-11-17 18:49:24.986840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.556 [2024-11-17 18:49:24.986867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22288b0 with addr=10.0.0.2, port=4420
00:28:38.556 [2024-11-17 18:49:24.986884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22288b0 is same with the state(6) to be set
00:28:38.556 [2024-11-17 18:49:24.986905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2223aa0 (9): Bad file descriptor
00:28:38.556 [2024-11-17 18:49:24.986949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2260b70 (9): Bad file descriptor
00:28:38.556 [2024-11-17 18:49:24.986993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d03610 (9): Bad file descriptor
00:28:38.556 [2024-11-17 18:49:24.987029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df81c0 (9): Bad file descriptor
00:28:38.556 [2024-11-17 18:49:24.987060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deddd0 (9): Bad file descriptor
00:28:38.556 [2024-11-17 18:49:24.987090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df48d0 (9): Bad file descriptor
00:28:38.556 [2024-11-17 18:49:24.987193] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:38.556 [2024-11-17 18:49:24.987313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.987338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.987371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.987388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.987405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.987420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.987436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.987450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.987465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.987480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.987496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.987510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.556 [2024-11-17 18:49:24.987526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.556 [2024-11-17 18:49:24.987540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.987980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.987994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.988009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.988023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.988038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.988052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.988067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.988081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.988096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.988110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.988135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.988150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.988165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.988178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.988194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.557 [2024-11-17 18:49:24.988208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.557 [2024-11-17 18:49:24.988223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 
18:49:24.988570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.557 [2024-11-17 18:49:24.988697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.557 [2024-11-17 18:49:24.988713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.988727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.988743] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.988758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.988774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.988788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.988804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.988817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.988833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.988847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.988862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.988876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.988897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.988912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.988927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.988941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.988956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.988970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.988986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 
[2024-11-17 18:49:24.989090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fd440 is same with the state(6) to be set 00:28:38.558 [2024-11-17 18:49:24.989430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221b790 (9): Bad file descriptor 00:28:38.558 [2024-11-17 18:49:24.989457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22288b0 (9): Bad file descriptor 00:28:38.558 [2024-11-17 18:49:24.989532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989826] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.989980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.989994] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.990010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.990024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.990040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.990054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.990070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.990084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.558 [2024-11-17 18:49:24.990099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.558 [2024-11-17 18:49:24.990113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 
18:49:24.990339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:38.559 [2024-11-17 18:49:24.990856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.990974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.990989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.991004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.991022] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.991038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.991052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.991068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.991082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.991097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.991111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.991127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.559 [2024-11-17 18:49:24.991142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.559 [2024-11-17 18:49:24.991158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.991172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.991187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.991201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.991217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.991231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.991246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.991261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.991276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.991290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.991306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.991320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.991336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.991349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.991364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.991378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.991398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.991412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.991427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.991441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.991456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.991470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.991484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffc9b0 is same with the state(6) to be set 00:28:38.560 [2024-11-17 18:49:24.993951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.993977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.993998] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:38.560 [2024-11-17 18:49:24.994341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.560 [2024-11-17 18:49:24.994789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.560 [2024-11-17 18:49:24.994803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.994819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.994833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.994848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.994862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.994878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.994892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.994908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.994922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.994937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.994951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.994966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.994980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995030] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995189] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 
18:49:24.995524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 
nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.561 [2024-11-17 18:49:24.995870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.561 [2024-11-17 18:49:24.995884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ffeb0 is same with the state(6) to be set 00:28:38.561 [2024-11-17 18:49:24.997087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:38.561 [2024-11-17 18:49:24.997117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:38.561 [2024-11-17 18:49:24.997136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:38.561 [2024-11-17 18:49:24.997192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:38.561 [2024-11-17 18:49:24.997209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:38.561 [2024-11-17 18:49:24.997224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:38.562 [2024-11-17 18:49:24.997241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:38.562 [2024-11-17 18:49:24.997257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:38.562 [2024-11-17 18:49:24.997269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:38.562 [2024-11-17 18:49:24.997282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:28:38.562 [2024-11-17 18:49:24.997294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:28:38.562 [2024-11-17 18:49:24.997634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.562 [2024-11-17 18:49:24.997670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df8620 with addr=10.0.0.2, port=4420
00:28:38.562 [2024-11-17 18:49:24.997704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df8620 is same with the state(6) to be set
00:28:38.562 [2024-11-17 18:49:24.997804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.562 [2024-11-17 18:49:24.997829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2223aa0 with addr=10.0.0.2, port=4420
00:28:38.562 [2024-11-17 18:49:24.997845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2223aa0 is same with the state(6) to be set
00:28:38.562 [2024-11-17 18:49:24.997946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.562 [2024-11-17 18:49:24.997971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2260140 with addr=10.0.0.2, port=4420
00:28:38.562 [2024-11-17 18:49:24.997987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260140 is same with the state(6) to be set
00:28:38.562 [2024-11-17 18:49:24.998308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.998971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.998985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.999001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.999015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.999030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.999049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.999065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.999079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.999095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.999109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.999124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.999138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.999153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.999167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.999183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.999197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.999212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.999226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.999242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.999255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.999270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.999284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.562 [2024-11-17 18:49:24.999300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.562 [2024-11-17 18:49:24.999313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:24.999978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:24.999993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:25.000007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:25.000022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:25.000036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:25.000051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:25.000065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:25.000080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:25.000093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:25.000109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:25.000123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:25.000138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:25.000151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:25.000166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:25.000184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:25.000201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:25.000215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:25.000230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:25.000244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:25.000257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffdaf0 is same with the state(6) to be set
00:28:38.563 [2024-11-17 18:49:25.001496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:25.001519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:25.001539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:25.001555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.563 [2024-11-17 18:49:25.001572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.563 [2024-11-17 18:49:25.001586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.001601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.001615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.001631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.001645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.001661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.001681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.001700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.001715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.001730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.001745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.001761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.001775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.001790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.001809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.001826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.001840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.001855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.001869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.001885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.001899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.001915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.001929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.001944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.001959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.001975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.001989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.564 [2024-11-17 18:49:25.002748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.564 [2024-11-17 18:49:25.002764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.565 [2024-11-17 18:49:25.002778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.565 [2024-11-17 18:49:25.002794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.565 [2024-11-17 18:49:25.002807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.565 [2024-11-17 18:49:25.002823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.565 [2024-11-17 18:49:25.002838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.565 [2024-11-17 18:49:25.002853] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.002867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.002882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.002895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.002911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.002925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.002945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.002959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.002975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.002989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.003019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.003048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.003077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.003106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.003135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.003165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 
[2024-11-17 18:49:25.003193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.003223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.003253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.003282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.003315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.003345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.003373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.003403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.003431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.003445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f69a0 is same with the state(6) to be set 00:28:38.565 [2024-11-17 18:49:25.004695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.004718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.004738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.004753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.004769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.004783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.004799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.004813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.004828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.004842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.004858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.004872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.004887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.004900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.004916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.004935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:38.565 [2024-11-17 18:49:25.004951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.004965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.004980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.004995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.005010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.005024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.005040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.005054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.005070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.005084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.005099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.005113] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.005129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.005143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.005158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.565 [2024-11-17 18:49:25.005172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.565 [2024-11-17 18:49:25.005187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 
18:49:25.005614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005791] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.005981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.005996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.006010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.006026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.006040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.006059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.006074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.006090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.006104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.006119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 
[2024-11-17 18:49:25.006144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.006160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.006174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.006189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.006202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.006218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.006231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.006246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.006260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.006276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.006289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.006304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.006317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.006333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.006347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.566 [2024-11-17 18:49:25.006362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.566 [2024-11-17 18:49:25.006376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.006391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.006405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.006420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.006438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.006455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.006468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.006484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.006498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.006513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.006527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.006542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.006555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.006571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.006585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.006601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.006615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.006628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f7d70 is same with the state(6) to be set 
00:28:38.567 [2024-11-17 18:49:25.007872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.007895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.007915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.007930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.007946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.007960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.007976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.007990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:38.567 [2024-11-17 18:49:25.008396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008561] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.567 [2024-11-17 18:49:25.008776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.567 [2024-11-17 18:49:25.008791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.008805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.008825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.008839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.008855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.008869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.008884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.008898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.008913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.008927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.008943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.008957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.008972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.008986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 
18:49:25.009072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009236] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 
[2024-11-17 18:49:25.009571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.568 [2024-11-17 18:49:25.009795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.568 [2024-11-17 18:49:25.009810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fbeb0 is same with the state(6) to be set 00:28:38.568 [2024-11-17 18:49:25.011327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 
[2024-11-17 18:49:25.011630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.011981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.011996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:38.569 [2024-11-17 18:49:25.012141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012305] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.569 [2024-11-17 18:49:25.012510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.569 [2024-11-17 18:49:25.012525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 
18:49:25.012818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.012982] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.012997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.013012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.013026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.013042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.013055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.013070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.013084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.013099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.013113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.013128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.013142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.013157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.013171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.013186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.013201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.013216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.013230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.013245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.570 [2024-11-17 18:49:25.013259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.570 [2024-11-17 18:49:25.013273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fe980 is same with the state(6) to be set 00:28:38.570 [2024-11-17 18:49:25.015223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:38.570 [2024-11-17 18:49:25.015259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:38.570 [2024-11-17 18:49:25.015279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:38.570 [2024-11-17 18:49:25.015297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:38.570 [2024-11-17 18:49:25.015316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:38.570 [2024-11-17 18:49:25.015405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df8620 (9): Bad file descriptor 00:28:38.570 [2024-11-17 18:49:25.015431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2223aa0 (9): Bad file descriptor 00:28:38.570 [2024-11-17 18:49:25.015448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2260140 (9): Bad file descriptor 00:28:38.570 [2024-11-17 18:49:25.015505] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:28:38.570 [2024-11-17 18:49:25.015529] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:28:38.570 [2024-11-17 18:49:25.015553] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:28:38.570 [2024-11-17 18:49:25.015572] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:28:38.570 [2024-11-17 18:49:25.015591] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
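The "ABORTED - SQ DELETION (00/08)" completions above encode the NVMe status as an "SCT/SC" hex pair: Status Code Type 0x0 (generic command status) and Status Code 0x08 (command aborted due to submission queue deletion). A minimal sketch of decoding that pair — the name table is a hand-written partial subset of the NVMe generic status codes, not an SPDK API:

```python
# Decode the "(SCT/SC)" pair that spdk_nvme_print_completion logs,
# e.g. "(00/08)" -> generic status 0x08, "ABORTED - SQ DELETION".
# GENERIC_STATUS is an illustrative, partial table (assumption, not SPDK code).
GENERIC_STATUS = {
    0x00: "SUCCESSFUL COMPLETION",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(tag: str) -> str:
    """Parse an 'SCT/SC' hex pair such as '00/08' and name the status."""
    sct_s, sc_s = tag.split("/")
    sct, sc = int(sct_s, 16), int(sc_s, 16)
    if sct == 0x0:  # generic command status type
        return GENERIC_STATUS.get(sc, f"GENERIC 0x{sc:02x}")
    return f"SCT 0x{sct:x} SC 0x{sc:02x}"

print(decode_status("00/08"))  # -> ABORTED - SQ DELETION
```

Every queued READ on qid 1 is completed with this status because the qpair's submission queue is being torn down during the controller reset, not because the I/O itself failed.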
00:28:38.570 [2024-11-17 18:49:25.015710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:38.570 task offset: 19584 on job bdev=Nvme5n1 fails 00:28:38.570 00:28:38.570 Latency(us) 00:28:38.570 [2024-11-17T17:49:25.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.570 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:38.570 Job: Nvme1n1 ended in about 0.75 seconds with error 00:28:38.570 Verification LBA range: start 0x0 length 0x400 00:28:38.570 Nvme1n1 : 0.75 170.05 10.63 85.03 0.00 247586.01 18641.35 245444.46 00:28:38.570 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:38.570 Job: Nvme2n1 ended in about 0.76 seconds with error 00:28:38.570 Verification LBA range: start 0x0 length 0x400 00:28:38.570 Nvme2n1 : 0.76 168.11 10.51 84.05 0.00 244367.36 25631.86 223696.21 00:28:38.570 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:38.570 Job: Nvme3n1 ended in about 0.76 seconds with error 00:28:38.570 Verification LBA range: start 0x0 length 0x400 00:28:38.570 Nvme3n1 : 0.76 167.41 10.46 83.70 0.00 239176.56 19806.44 253211.69 00:28:38.570 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:38.570 Job: Nvme4n1 ended in about 0.77 seconds with error 00:28:38.570 Verification LBA range: start 0x0 length 0x400 00:28:38.570 Nvme4n1 : 0.77 166.71 10.42 83.36 0.00 234075.02 18641.35 233016.89 00:28:38.570 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:38.570 Job: Nvme5n1 ended in about 0.74 seconds with error 00:28:38.570 Verification LBA range: start 0x0 length 0x400 00:28:38.570 Nvme5n1 : 0.74 172.01 10.75 86.01 0.00 219957.92 5024.43 256318.58 00:28:38.570 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:38.570 Job: Nvme6n1 ended in about 0.75 seconds with error 00:28:38.570 Verification LBA 
range: start 0x0 length 0x400 00:28:38.570 Nvme6n1 : 0.75 171.77 10.74 85.88 0.00 214137.62 8446.86 264085.81 00:28:38.571 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:38.571 Job: Nvme7n1 ended in about 0.77 seconds with error 00:28:38.571 Verification LBA range: start 0x0 length 0x400 00:28:38.571 Nvme7n1 : 0.77 166.03 10.38 83.01 0.00 216516.08 25631.86 250104.79 00:28:38.571 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:38.571 Job: Nvme8n1 ended in about 0.75 seconds with error 00:28:38.571 Verification LBA range: start 0x0 length 0x400 00:28:38.571 Nvme8n1 : 0.75 169.77 10.61 84.89 0.00 204668.02 16893.72 253211.69 00:28:38.571 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:38.571 Job: Nvme9n1 ended in about 0.77 seconds with error 00:28:38.571 Verification LBA range: start 0x0 length 0x400 00:28:38.571 Nvme9n1 : 0.77 82.64 5.17 82.64 0.00 308088.98 19903.53 284280.60 00:28:38.571 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:38.571 Job: Nvme10n1 ended in about 0.76 seconds with error 00:28:38.571 Verification LBA range: start 0x0 length 0x400 00:28:38.571 Nvme10n1 : 0.76 84.54 5.28 84.54 0.00 290756.46 23495.87 262532.36 00:28:38.571 [2024-11-17T17:49:25.147Z] =================================================================================================================== 00:28:38.571 [2024-11-17T17:49:25.147Z] Total : 1519.03 94.94 843.11 0.00 237826.59 5024.43 284280.60 00:28:38.571 [2024-11-17 18:49:25.042313] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:38.571 [2024-11-17 18:49:25.042400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:38.571 [2024-11-17 18:49:25.042667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-11-17 18:49:25.042708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x22288b0 with addr=10.0.0.2, port=4420 00:28:38.571 [2024-11-17 18:49:25.042729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22288b0 is same with the state(6) to be set 00:28:38.571 [2024-11-17 18:49:25.042844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-11-17 18:49:25.042873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221b790 with addr=10.0.0.2, port=4420 00:28:38.571 [2024-11-17 18:49:25.042890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x221b790 is same with the state(6) to be set 00:28:38.571 [2024-11-17 18:49:25.042986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-11-17 18:49:25.043012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df81c0 with addr=10.0.0.2, port=4420 00:28:38.571 [2024-11-17 18:49:25.043029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df81c0 is same with the state(6) to be set 00:28:38.571 [2024-11-17 18:49:25.043122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-11-17 18:49:25.043148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df48d0 with addr=10.0.0.2, port=4420 00:28:38.571 [2024-11-17 18:49:25.043165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df48d0 is same with the state(6) to be set 00:28:38.571 [2024-11-17 18:49:25.043242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-11-17 18:49:25.043269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1deddd0 with addr=10.0.0.2, port=4420 00:28:38.571 [2024-11-17 18:49:25.043285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1deddd0 is same with the state(6) to be set 00:28:38.571 [2024-11-17 18:49:25.043301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:38.571 [2024-11-17 18:49:25.043314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:38.571 [2024-11-17 18:49:25.043330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:38.571 [2024-11-17 18:49:25.043349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:38.571 [2024-11-17 18:49:25.043367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:38.571 [2024-11-17 18:49:25.043380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:38.571 [2024-11-17 18:49:25.043394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:38.571 [2024-11-17 18:49:25.043406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:38.571 [2024-11-17 18:49:25.043432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:38.571 [2024-11-17 18:49:25.043446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:38.571 [2024-11-17 18:49:25.043459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:38.571 [2024-11-17 18:49:25.043472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
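The bdevperf latency summary printed above can be cross-checked: the "Total" IOPS figure (1519.03) should be the sum of the ten per-job IOPS columns, up to rounding of the printed values. A quick sanity check using the numbers copied from the table:

```python
# Per-job IOPS for Nvme1n1..Nvme10n1, transcribed from the summary table above.
per_job_iops = [170.05, 168.11, 167.41, 166.71, 172.01,
                171.77, 166.03, 169.77, 82.64, 84.54]

total = sum(per_job_iops)
print(f"{total:.2f}")  # ~1519.04, matching the printed Total of 1519.03
                       # within the rounding of the two-decimal columns
assert abs(total - 1519.03) < 0.05
```

The two low outliers (Nvme9n1 and Nvme10n1 at ~83 IOPS with correspondingly higher average latency) are the controllers whose failover was already in progress when the shutdown hit.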
00:28:38.571 [2024-11-17 18:49:25.045018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-11-17 18:49:25.045048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d03610 with addr=10.0.0.2, port=4420 00:28:38.571 [2024-11-17 18:49:25.045065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d03610 is same with the state(6) to be set 00:28:38.571 [2024-11-17 18:49:25.045153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.571 [2024-11-17 18:49:25.045180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2260b70 with addr=10.0.0.2, port=4420 00:28:38.571 [2024-11-17 18:49:25.045197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260b70 is same with the state(6) to be set 00:28:38.571 [2024-11-17 18:49:25.045223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22288b0 (9): Bad file descriptor 00:28:38.571 [2024-11-17 18:49:25.045246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221b790 (9): Bad file descriptor 00:28:38.571 [2024-11-17 18:49:25.045263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df81c0 (9): Bad file descriptor 00:28:38.571 [2024-11-17 18:49:25.045280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df48d0 (9): Bad file descriptor 00:28:38.571 [2024-11-17 18:49:25.045297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deddd0 (9): Bad file descriptor 00:28:38.571 [2024-11-17 18:49:25.045369] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:28:38.571 [2024-11-17 18:49:25.045396] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:28:38.571 [2024-11-17 18:49:25.045415] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:28:38.571 [2024-11-17 18:49:25.045435] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:28:38.571 [2024-11-17 18:49:25.045455] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:28:38.571 [2024-11-17 18:49:25.045852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d03610 (9): Bad file descriptor 00:28:38.571 [2024-11-17 18:49:25.045883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2260b70 (9): Bad file descriptor 00:28:38.571 [2024-11-17 18:49:25.045900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:38.571 [2024-11-17 18:49:25.045912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:38.571 [2024-11-17 18:49:25.045926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:38.571 [2024-11-17 18:49:25.045939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:28:38.571 [2024-11-17 18:49:25.045954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:38.571 [2024-11-17 18:49:25.045966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:38.571 [2024-11-17 18:49:25.045984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:38.571 [2024-11-17 18:49:25.045997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:38.571 [2024-11-17 18:49:25.046012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:38.571 [2024-11-17 18:49:25.046024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:38.571 [2024-11-17 18:49:25.046037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:38.571 [2024-11-17 18:49:25.046049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:38.571 [2024-11-17 18:49:25.046062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:38.571 [2024-11-17 18:49:25.046074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:38.571 [2024-11-17 18:49:25.046087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:38.571 [2024-11-17 18:49:25.046099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:28:38.571 [2024-11-17 18:49:25.046112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:38.571 [2024-11-17 18:49:25.046125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:38.571 [2024-11-17 18:49:25.046137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:38.571 [2024-11-17 18:49:25.046149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:38.571 [2024-11-17 18:49:25.046236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:38.571 [2024-11-17 18:49:25.046261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:38.571 [2024-11-17 18:49:25.046278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:38.571 [2024-11-17 18:49:25.046317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:38.571 [2024-11-17 18:49:25.046333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:38.571 [2024-11-17 18:49:25.046346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:38.571 [2024-11-17 18:49:25.046360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:28:38.571 [2024-11-17 18:49:25.046373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:38.571 [2024-11-17 18:49:25.046385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:38.572 [2024-11-17 18:49:25.046397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:38.572 [2024-11-17 18:49:25.046411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:38.572 [2024-11-17 18:49:25.046514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-11-17 18:49:25.046542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2260140 with addr=10.0.0.2, port=4420 00:28:38.572 [2024-11-17 18:49:25.046558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2260140 is same with the state(6) to be set 00:28:38.572 [2024-11-17 18:49:25.046649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-11-17 18:49:25.046687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2223aa0 with addr=10.0.0.2, port=4420 00:28:38.572 [2024-11-17 18:49:25.046706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2223aa0 is same with the state(6) to be set 00:28:38.572 [2024-11-17 18:49:25.046780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-11-17 18:49:25.046805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df8620 with addr=10.0.0.2, port=4420 00:28:38.572 [2024-11-17 18:49:25.046820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df8620 is same with the state(6) to be set 00:28:38.572 [2024-11-17 
18:49:25.046865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2260140 (9): Bad file descriptor 00:28:38.572 [2024-11-17 18:49:25.046889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2223aa0 (9): Bad file descriptor 00:28:38.572 [2024-11-17 18:49:25.046907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df8620 (9): Bad file descriptor 00:28:38.572 [2024-11-17 18:49:25.046949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:38.572 [2024-11-17 18:49:25.046967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:38.572 [2024-11-17 18:49:25.046981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:38.572 [2024-11-17 18:49:25.046994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:38.572 [2024-11-17 18:49:25.047008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:38.572 [2024-11-17 18:49:25.047021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:38.572 [2024-11-17 18:49:25.047033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:38.572 [2024-11-17 18:49:25.047045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
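The repeated "connect() failed, errno = 111" lines are Linux ECONNREFUSED: nothing is listening on 10.0.0.2:4420 any longer while the target shuts down, so every reconnect attempt from the reset path is refused. The numeric-to-symbolic mapping can be confirmed from Python's stdlib (note the value 111 is Linux-specific; other platforms use different numbers for ECONNREFUSED):

```python
# Map the raw errno printed by posix_sock_create to its symbolic name
# and message. On Linux, errno 111 is ECONNREFUSED.
import errno
import os

print(errno.errorcode[errno.ECONNREFUSED])   # ECONNREFUSED
print(os.strerror(errno.ECONNREFUSED))       # Connection refused
```

This is why each failed connect is immediately followed by "Bad file descriptor" flush errors and a terminal "Resetting controller failed." for that subsystem: the qpair socket never comes back up before the reconnect poller gives up.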
00:28:38.572 [2024-11-17 18:49:25.047059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:38.572 [2024-11-17 18:49:25.047071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:38.572 [2024-11-17 18:49:25.047084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:38.572 [2024-11-17 18:49:25.047096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:39.140 18:49:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 816989 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 816989 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 816989 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:40.073 18:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:40.073 rmmod nvme_tcp 00:28:40.073 rmmod nvme_fabrics 00:28:40.073 rmmod nvme_keyring 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 816810 ']' 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 816810 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 816810 ']' 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 816810 00:28:40.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (816810) - No such process 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 816810 is not found' 00:28:40.073 Process with pid 816810 is not found 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:40.073 18:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:28:40.073 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:40.074 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:40.074 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.074 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.074 18:49:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:42.611 00:28:42.611 real 0m7.209s 00:28:42.611 user 0m17.035s 00:28:42.611 sys 0m1.363s 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:42.611 ************************************ 00:28:42.611 END TEST nvmf_shutdown_tc3 00:28:42.611 ************************************ 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:42.611 ************************************ 00:28:42.611 START TEST nvmf_shutdown_tc4 00:28:42.611 ************************************ 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:42.611 18:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.611 18:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:42.611 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:42.611 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:42.611 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.612 18:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:42.612 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:42.612 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:42.612 18:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:42.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:28:42.612 00:28:42.612 --- 10.0.0.2 ping statistics --- 00:28:42.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.612 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:42.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:28:42.612 00:28:42.612 --- 10.0.0.1 ping statistics --- 00:28:42.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.612 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:42.612 18:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=817774 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 817774 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 817774 ']' 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
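The `waitforlisten 817774` step above blocks until the freshly launched nvmf_tgt answers on its RPC socket at /var/tmp/spdk.sock, retrying up to the `max_retries=100` budget visible in the trace. A minimal sketch of that kind of poll loop (the path check here is simplified; the real helper in autotest_common.sh tests for a UNIX-domain socket with `-S` and also verifies the PID is still alive):

```shell
# Poll until a path appears, as a stand-in for the RPC-socket check in
# autotest_common.sh's waitforlisten (which uses -S for a socket and
# kill -0 for process liveness).
wait_for_rpc_sock() {
    local sock=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$sock" ] && return 0
        sleep 0.1
    done
    return 1   # timed out waiting for the target to come up
}
```

In the trace, a timeout would surface as waitforlisten returning nonzero and the test aborting before any subsystem is created.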
00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.612 18:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:42.612 [2024-11-17 18:49:28.890138] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:28:42.612 [2024-11-17 18:49:28.890219] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.612 [2024-11-17 18:49:28.967168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:42.612 [2024-11-17 18:49:29.014565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.612 [2024-11-17 18:49:29.014621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.612 [2024-11-17 18:49:29.014650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.612 [2024-11-17 18:49:29.014661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.612 [2024-11-17 18:49:29.014671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
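The reactor notices above follow from the `-m 0x1E` coremask the target was launched with: each set bit selects one core, which is why the log reports four reactors on cores 1 through 4. A short sketch of that bit-to-core mapping:

```shell
# Map an SPDK coremask to its core list: bit n set => core n runs a reactor.
# 0x1E = 0b11110, matching the "Reactor started on core 1..4" notices above.
mask=0x1E
cores=()
for ((core = 0; core < 32; core++)); do
    if (( (mask >> core) & 1 )); then
        cores+=("$core")
    fi
done
echo "cores: ${cores[*]}"
```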
00:28:42.612 [2024-11-17 18:49:29.016329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.612 [2024-11-17 18:49:29.016391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.612 [2024-11-17 18:49:29.016459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:42.612 [2024-11-17 18:49:29.016462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.612 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.612 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:42.612 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:42.612 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:42.612 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:42.613 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.613 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:42.613 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.613 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:42.613 [2024-11-17 18:49:29.167304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.613 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.613 18:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:28:42.613 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:28:42.613 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:42.613 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:42.613 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:42.613 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:42.613 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:28:42.613 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:28:42.613 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
[… the identical shutdown.sh@28/@29 for/cat pair repeats for the remaining eight subsystem indices, timestamped 00:28:42.870 18:49:29 …]
00:28:42.870 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:28:42.870 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:42.870 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:42.870 Malloc1
00:28:42.870 [2024-11-17 18:49:29.269550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:42.870 Malloc2
00:28:42.870 Malloc3
00:28:42.870 Malloc4
00:28:42.870 Malloc5
00:28:43.128 Malloc6
00:28:43.128 Malloc7
00:28:43.128 Malloc8
00:28:43.128 Malloc9
00:28:43.128 Malloc10
00:28:43.386 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.386 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:28:43.386 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:43.386 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:28:43.386 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=817957
00:28:43.386 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:28:43.386 18:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:28:43.386 [2024-11-17 18:49:29.797025] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
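The trace above shows the whole tc4 shape: `num_subsystems=({1..10})` builds an index array, each `shutdown.sh@28`/`@29` `for`/`cat` pair appends one subsystem's RPC fragment, `spdk_nvme_perf` is started against the TCP listener, and the target app is later torn down via `killprocess` while I/O is in flight. A minimal sketch of those two helpers, under the assumption that the RPC names below are illustrative stand-ins (the real fragments are written to rpcs.txt by shutdown.sh and differ in detail):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the nvmf_shutdown_tc4 flow traced above.
# The RPC command names are illustrative assumptions, not the exact
# contents of target/shutdown.sh.
set -u

num_subsystems=({1..10})            # brace expansion -> array holding 1..10

# shutdown.sh@28/@29 shape: one `cat` per index emits that subsystem's RPCs
gen_rpcs() {
    local i
    for i in "${num_subsystems[@]}"; do
        cat <<EOF
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
    done
}

# Same shape as the autotest_common.sh killprocess trace below: confirm the
# pid is alive, refuse to kill a sudo wrapper, SIGTERM it, then reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}

gen_rpcs | head -n 2
```

The sudo check mirrors the `'[' reactor_1 = sudo ']'` line in the trace: killing a sudo wrapper would orphan the real target process instead of shutting it down.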
00:28:48.659 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:48.659 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 817774
00:28:48.659 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 817774 ']'
00:28:48.659 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 817774
00:28:48.659 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:28:48.659 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:48.659 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 817774
00:28:48.659 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:48.659 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:48.659 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 817774'
killing process with pid 817774
00:28:48.659 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 817774
00:28:48.659 18:49:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 817774
00:28:48.660 Write completed with error (sct=0, sc=8)
00:28:48.660 Write completed with error (sct=0, sc=8)
00:28:48.660 Write completed with error (sct=0, sc=8)
00:28:48.660 Write completed with error
(sct=0, sc=8)
00:28:48.660 starting I/O failed: -6
00:28:48.660 Write completed with error (sct=0, sc=8)
[… runs of identical "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records condensed; the distinct *ERROR* records, with concurrent output de-interleaved, are kept below in order …]
00:28:48.660 [2024-11-17 18:49:34.792343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x962a70 is same with the state(6) to be set (logged 4 times, 18:49:34.792343-.792445)
00:28:48.660 [2024-11-17 18:49:34.793003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:48.660 [2024-11-17 18:49:34.793520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6fb0 is same with the state(6) to be set (logged 8 times, 18:49:34.793520-.793648)
00:28:48.660 [2024-11-17 18:49:34.794192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:48.661 [2024-11-17 18:49:34.794478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f6610 is same with the state(6) to be set (logged 8 times, 18:49:34.794478-.794601)
00:28:48.661 [2024-11-17 18:49:34.795345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:48.662 [2024-11-17 18:49:34.797099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.662 NVMe io qpair process completion error
00:28:48.662 [2024-11-17 18:49:34.804482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:48.662 [2024-11-17 18:49:34.805613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.663 [2024-11-17 18:49:34.806818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:48.663 [2024-11-17 18:49:34.807503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94ec70 is same with the state(6) to be set (logged 8 times, 18:49:34.807503-.807637)
00:28:48.663 starting I/O failed: -6 00:28:48.663 Write
completed with error (sct=0, sc=8) 00:28:48.663 starting I/O failed: -6 00:28:48.663 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 [2024-11-17 18:49:34.808199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f140 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f140 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f140 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808261] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f140 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f140 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f140 is same with the state(6) to be set 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 [2024-11-17 18:49:34.808641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f610 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f610 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f610 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f610 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f610 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f610 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x94f610 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f610 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f610 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f610 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.808960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.664 NVMe io qpair process completion error 00:28:48.664 [2024-11-17 18:49:34.809173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94e7a0 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.809206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94e7a0 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.809225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94e7a0 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.809239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94e7a0 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.809251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94e7a0 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.809263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94e7a0 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.809275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94e7a0 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.809287] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94e7a0 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.809299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94e7a0 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.809311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94e7a0 is same with the state(6) to be set 00:28:48.664 [2024-11-17 18:49:34.809323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94e7a0 is same with the state(6) to be set 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 
starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 [2024-11-17 18:49:34.810139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.664 starting I/O failed: -6 00:28:48.664 starting I/O failed: -6 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 
starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.664 starting I/O failed: -6 00:28:48.664 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 [2024-11-17 18:49:34.811320] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with 
error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 
starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 [2024-11-17 18:49:34.812502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.665 Write completed with error (sct=0, sc=8) 00:28:48.665 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O 
failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting 
I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 [2024-11-17 18:49:34.814183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No 
such device or address) on qpair id 2 00:28:48.666 NVMe io qpair process completion error 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, 
sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 [2024-11-17 18:49:34.815389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.666 starting I/O failed: -6 00:28:48.666 Write completed with error (sct=0, sc=8) 00:28:48.667 Write completed with error (sct=0, sc=8) 00:28:48.667 starting I/O failed: -6 00:28:48.667 Write completed with error (sct=0, sc=8) 00:28:48.667 Write completed with error (sct=0, sc=8) 00:28:48.667 starting I/O failed: -6 00:28:48.667 Write completed with error (sct=0, sc=8) 00:28:48.667 Write completed with error (sct=0, sc=8) 00:28:48.667 starting I/O failed: -6 00:28:48.667 Write completed with 
error (sct=0, sc=8) 00:28:48.667 Write completed with error (sct=0, sc=8) 00:28:48.667 starting I/O failed: -6 [previous two lines repeated many times]
00:28:48.667 [2024-11-17 18:49:34.816456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:48.667 Write completed with error (sct=0, sc=8) 00:28:48.667 starting I/O failed: -6 [repeated many times]
00:28:48.667 [2024-11-17 18:49:34.817602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:48.667 Write completed with error (sct=0, sc=8) 00:28:48.668 starting I/O failed: -6 [repeated many times]
00:28:48.668 [2024-11-17 18:49:34.819346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:48.668 NVMe io qpair process completion error
00:28:48.668 Write completed with error (sct=0, sc=8) 00:28:48.668 starting I/O failed: -6 [repeated many times]
00:28:48.668 [2024-11-17 18:49:34.820507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.668 Write completed with error (sct=0, sc=8) 00:28:48.669 starting I/O failed: -6 [repeated many times]
00:28:48.669 [2024-11-17 18:49:34.821510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:48.669 Write completed with error (sct=0, sc=8) 00:28:48.669 starting I/O failed: -6 [repeated many times]
00:28:48.669 [2024-11-17 18:49:34.822693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:48.669 Write completed with error (sct=0, sc=8) 00:28:48.670 starting I/O failed: -6 [repeated many times]
00:28:48.670 [2024-11-17 18:49:34.824903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:48.670 NVMe io qpair process completion error
00:28:48.670 Write completed with error (sct=0, sc=8) 00:28:48.670 starting I/O failed: -6 [repeated many times]
00:28:48.670 [2024-11-17 18:49:34.826177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.670 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 [repeated many times]
00:28:48.671 [2024-11-17 18:49:34.827252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 [repeated many times]
00:28:48.671 [2024-11-17 18:49:34.828444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed:
-6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.671 starting I/O failed: -6 00:28:48.671 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O 
failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 [2024-11-17 18:49:34.831958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:48.672 NVMe io qpair process completion error 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write 
completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O 
failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 [2024-11-17 18:49:34.833294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error 
(sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 [2024-11-17 18:49:34.834334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.672 Write completed with error (sct=0, sc=8) 00:28:48.672 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting 
I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write 
completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 [2024-11-17 18:49:34.835573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 
00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, 
sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.673 starting I/O failed: -6 00:28:48.673 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error 
(sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 [2024-11-17 18:49:34.838725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:48.674 NVMe io qpair process completion error 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error 
(sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, 
sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 
starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 
Write completed with error (sct=0, sc=8) 00:28:48.674 starting I/O failed: -6 00:28:48.674 Write completed with error (sct=0, sc=8) 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, 
sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 [2024-11-17 18:49:34.842385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 
00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, 
sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error 
(sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 [2024-11-17 18:49:34.845087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.675 NVMe io qpair process completion error 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 starting I/O failed: -6 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.675 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed 
with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 [2024-11-17 18:49:34.846466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, 
sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 
Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 [2024-11-17 18:49:34.847519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 
starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 
Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.676 Write completed with error (sct=0, sc=8) 00:28:48.676 starting I/O failed: -6 00:28:48.677 [2024-11-17 18:49:34.848684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write 
completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 
Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 
00:28:48.677 [2024-11-17 18:49:34.850861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.677 NVMe io qpair process completion error 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed 
with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 starting I/O failed: -6 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 Write completed with error (sct=0, sc=8) 00:28:48.677 [2024-11-17 18:49:34.852017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.677 starting I/O failed: -6 00:28:48.677 starting I/O failed: -6 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed 
with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 [2024-11-17 18:49:34.853129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O 
failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write 
completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 [2024-11-17 18:49:34.854351] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.678 starting I/O failed: -6 00:28:48.678 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with 
error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed 
with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 Write completed with error (sct=0, sc=8) 00:28:48.679 starting I/O failed: -6 00:28:48.679 [2024-11-17 18:49:34.856805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.679 NVMe io qpair process completion error 00:28:48.679 Initializing NVMe Controllers 00:28:48.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode3
00:28:48.679 Controller IO queue size 128, less than required.
00:28:48.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:48.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:48.679 Controller IO queue size 128, less than required.
00:28:48.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:48.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:28:48.679 Controller IO queue size 128, less than required.
00:28:48.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:48.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:28:48.679 Controller IO queue size 128, less than required.
00:28:48.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:48.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:28:48.679 Controller IO queue size 128, less than required.
00:28:48.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:48.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:28:48.679 Controller IO queue size 128, less than required.
00:28:48.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:48.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:48.679 Controller IO queue size 128, less than required.
00:28:48.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:48.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:28:48.679 Controller IO queue size 128, less than required.
00:28:48.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:48.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:48.679 Controller IO queue size 128, less than required.
00:28:48.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:48.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:28:48.679 Controller IO queue size 128, less than required.
00:28:48.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:48.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:48.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:48.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:48.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:48.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:48.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:48.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:48.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:48.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:48.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:48.679 Initialization complete. Launching workers.
00:28:48.679 ========================================================
00:28:48.679                                                        Latency(us)
00:28:48.679 Device Information                                          :       IOPS      MiB/s    Average        min        max
00:28:48.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1799.59      77.33   71149.75     818.47  126421.11
00:28:48.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1824.60      78.40   70197.26    1119.22  156257.09
00:28:48.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1800.46      77.36   71163.47     792.86  124081.88
00:28:48.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1840.91      79.10   69630.44     833.99  130732.89
00:28:48.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1839.39      79.04   69741.20     793.93  122896.38
00:28:48.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1815.90      78.03   70688.23     929.50  122636.84
00:28:48.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1740.01      74.77   72967.05     936.49  121354.60
00:28:48.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1773.71      76.21   72358.11     935.65  139013.57
00:28:48.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1754.36      75.38   73189.22     847.28  141535.02
00:28:48.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1771.54      76.12   71678.58     824.34  120888.64
00:28:48.680 ========================================================
00:28:48.680 Total                                                       :   17960.49     771.74   71254.49     792.86  156257.09
00:28:48.680
00:28:48.680 [2024-11-17 18:49:34.863112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1077370 is same with the state(6) to be set
00:28:48.680 [2024-11-17 18:49:34.863206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10776a0 is same with the state(6) to be set
00:28:48.680 [2024-11-17 18:49:34.863266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1078e10 is same with the state(6) to be set
00:28:48.680 [2024-11-17 18:49:34.863324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10797a0 is same with the state(6) to be set
00:28:48.680 [2024-11-17 18:49:34.863379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10779d0 is same with the state(6) to be set
00:28:48.680 [2024-11-17 18:49:34.863436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1076fb0 is same with the state(6) to be set
00:28:48.680 [2024-11-17 18:49:34.863493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107cb30 is same with the state(6) to be set
00:28:48.680 [2024-11-17 18:49:34.863550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079470 is same with the state(6) to be set
00:28:48.680 [2024-11-17 18:49:34.863609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079140 is same with the state(6) to be set
00:28:48.680 [2024-11-17 18:49:34.863666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1077190 is same with the state(6) to be set
00:28:48.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:48.938 18:49:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:28:49.874 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 817957
00:28:49.874 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:28:49.874 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 817957
00:28:49.874 18:49:36
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:49.874 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:49.874 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:28:49.874 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:49.874 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 817957 00:28:49.874 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:28:49.874 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:49.874 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:49.874 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:49.874 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:49.875 18:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.875 rmmod nvme_tcp 00:28:49.875 rmmod nvme_fabrics 00:28:49.875 rmmod nvme_keyring 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 817774 ']' 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 817774 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 817774 ']' 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 817774 00:28:49.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (817774) - No such process 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 817774 is not found' 
00:28:49.875 Process with pid 817774 is not found 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.875 18:49:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:52.416 00:28:52.416 real 0m9.784s 00:28:52.416 user 0m23.973s 00:28:52.416 sys 0m5.653s 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 
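The "Controller IO queue size 128, less than required" advisories earlier in this run indicate the perf workload requested a deeper queue than the target's IO queues allow. A minimal sketch of rerunning the tool with a queue depth at or below that limit follows; the binary path and target address are taken from this log, the `-q`/`-o`/`-w`/`-t`/`-r` flags are standard spdk_nvme_perf options, but the specific values chosen here are illustrative assumptions, not taken from this run.

```shell
# Hedged sketch, not the command this job ran: drive one of the subsystems
# above with queue depth 64 (below the controller's 128-entry IO queues)
# so requests are not queued inside the NVMe driver.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode3' \
    -q 64 -o 4096 -w randwrite -t 10
```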
00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:52.416 ************************************ 00:28:52.416 END TEST nvmf_shutdown_tc4 00:28:52.416 ************************************ 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:52.416 00:28:52.416 real 0m37.164s 00:28:52.416 user 1m39.807s 00:28:52.416 sys 0m11.986s 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:52.416 ************************************ 00:28:52.416 END TEST nvmf_shutdown 00:28:52.416 ************************************ 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:52.416 ************************************ 00:28:52.416 START TEST nvmf_nsid 00:28:52.416 ************************************ 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:52.416 * Looking for test storage... 
00:28:52.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:52.416 
18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:28:52.416 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:52.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.417 --rc genhtml_branch_coverage=1 00:28:52.417 --rc genhtml_function_coverage=1 00:28:52.417 --rc genhtml_legend=1 00:28:52.417 --rc geninfo_all_blocks=1 00:28:52.417 --rc 
geninfo_unexecuted_blocks=1 00:28:52.417 00:28:52.417 ' 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:52.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.417 --rc genhtml_branch_coverage=1 00:28:52.417 --rc genhtml_function_coverage=1 00:28:52.417 --rc genhtml_legend=1 00:28:52.417 --rc geninfo_all_blocks=1 00:28:52.417 --rc geninfo_unexecuted_blocks=1 00:28:52.417 00:28:52.417 ' 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:52.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.417 --rc genhtml_branch_coverage=1 00:28:52.417 --rc genhtml_function_coverage=1 00:28:52.417 --rc genhtml_legend=1 00:28:52.417 --rc geninfo_all_blocks=1 00:28:52.417 --rc geninfo_unexecuted_blocks=1 00:28:52.417 00:28:52.417 ' 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:52.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.417 --rc genhtml_branch_coverage=1 00:28:52.417 --rc genhtml_function_coverage=1 00:28:52.417 --rc genhtml_legend=1 00:28:52.417 --rc geninfo_all_blocks=1 00:28:52.417 --rc geninfo_unexecuted_blocks=1 00:28:52.417 00:28:52.417 ' 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.417 18:49:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:52.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:28:52.417 18:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.948 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:54.949 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:54.949 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:54.949 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:54.949 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:54.949 18:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:54.949 18:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:54.949 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:28:54.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:28:54.949 00:28:54.949 --- 10.0.0.2 ping statistics --- 00:28:54.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.949 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:54.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:54.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:28:54.949 00:28:54.949 --- 10.0.0.1 ping statistics --- 00:28:54.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.949 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:54.949 18:49:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=820683 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 820683 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 820683 ']' 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:54.949 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:54.949 [2024-11-17 18:49:41.108816] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:28:54.949 [2024-11-17 18:49:41.108900] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.949 [2024-11-17 18:49:41.181010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.950 [2024-11-17 18:49:41.226843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.950 [2024-11-17 18:49:41.226915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.950 [2024-11-17 18:49:41.226929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.950 [2024-11-17 18:49:41.226940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.950 [2024-11-17 18:49:41.226950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
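The "Waiting for process to start up and listen on UNIX domain socket..." messages above come from the `waitforlisten` helper in autotest_common.sh, and the same bounded-retry idea reappears later as `waitforblk` (the `local i=0` / `'[' 0 -lt 15 ']'` / `sleep 1` trace lines). A minimal sketch of that polling pattern, with illustrative names and a shorter sleep (both assumptions, not the actual SPDK code):

```shell
#!/usr/bin/env bash
# Sketch of the bounded-retry pattern behind waitforlisten/waitforblk:
# retry a condition command, sleeping between attempts, and give up
# after a fixed number of tries. Names and timings are illustrative.
wait_for() {
    local max_retries=$1
    shift
    local i=0
    until "$@"; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            return 1    # condition never became true
        fi
        sleep 0.1
    done
    return 0
}

# Example: poll for a file that appears shortly after we start waiting
# (stands in for "the target is listening on its socket").
marker=$(mktemp -u)
( sleep 0.2; : > "$marker" ) &
wait_for 15 test -e "$marker" && echo ready
wait
rm -f "$marker"
```

The real helpers poll once per second with a cap of 15 attempts, then fall through to `return 0` once the socket or block device shows up.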
00:28:54.950 [2024-11-17 18:49:41.227538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=820714 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.950 
18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=4397c5d1-9d5a-479f-8ca6-6dd145f069c6 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=ea5e6e96-59ad-48be-85ea-7047cddf3209 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=7ffa490a-b124-4e08-8a41-54d9f6d1f2ac 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:54.950 null0 00:28:54.950 null1 00:28:54.950 null2 00:28:54.950 [2024-11-17 18:49:41.401096] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.950 [2024-11-17 18:49:41.416011] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:28:54.950 [2024-11-17 18:49:41.416107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid820714 ] 00:28:54.950 [2024-11-17 18:49:41.425281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 820714 /var/tmp/tgt2.sock 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 820714 ']' 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:28:54.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
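After the `nvme connect` that follows, the trace shows nsid.sh's `nvme_connect` helper looping over `/sys/class/nvme/nvme*` and matching `subsysnqn` against the connected NQN before echoing the controller name. A hedged sketch of that lookup; the base directory is parameterized here purely so it can be exercised without real NVMe hardware (the actual helper hardcodes `/sys/class/nvme`):

```shell
#!/usr/bin/env bash
# Sketch of the controller lookup done by nvme_connect in nsid.sh:
# scan sysfs controller entries and return the one whose subsysnqn
# matches the NQN we just connected to.
find_ctrlr() {
    local base=$1 want_nqn=$2 ctrlr
    for ctrlr in "$base"/nvme*; do
        [ -e "$ctrlr/subsysnqn" ] || continue
        if [ "$(cat "$ctrlr/subsysnqn")" = "$want_nqn" ]; then
            basename "$ctrlr"    # e.g. nvme0
            return 0
        fi
    done
    return 1
}

# Simulated sysfs layout standing in for /sys/class/nvme:
tmp=$(mktemp -d)
mkdir -p "$tmp/nvme0"
echo nqn.2024-10.io.spdk:cnode2 > "$tmp/nvme0/subsysnqn"
find_ctrlr "$tmp" nqn.2024-10.io.spdk:cnode2
rm -rf "$tmp"
```

This is why the trace compares `nqn.2024-10.io.spdk:cnode2` against the glob-escaped pattern and then echoes `nvme0` as the controller to wait on.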
00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:54.950 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:54.950 [2024-11-17 18:49:41.485054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.209 [2024-11-17 18:49:41.538517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.466 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.466 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:55.466 18:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:28:55.724 [2024-11-17 18:49:42.168928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.724 [2024-11-17 18:49:42.185124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:28:55.724 nvme0n1 nvme0n2 00:28:55.724 nvme1n1 00:28:55.724 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:28:55.725 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:28:55.725 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:56.291 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:28:56.291 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:28:56.291 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:28:56.291 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:28:56.291 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:28:56.291 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:28:56.291 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:28:56.291 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:56.291 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:56.291 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:56.291 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:28:56.291 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:28:56.291 18:49:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 4397c5d1-9d5a-479f-8ca6-6dd145f069c6 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:28:57.664 18:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4397c5d19d5a479f8ca66dd145f069c6 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4397C5D19D5A479F8CA66DD145F069C6 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 4397C5D19D5A479F8CA66DD145F069C6 == \4\3\9\7\C\5\D\1\9\D\5\A\4\7\9\F\8\C\A\6\6\D\D\1\4\5\F\0\6\9\C\6 ]] 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid ea5e6e96-59ad-48be-85ea-7047cddf3209 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:28:57.664 
18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ea5e6e9659ad48be85ea7047cddf3209 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EA5E6E9659AD48BE85EA7047CDDF3209 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ EA5E6E9659AD48BE85EA7047CDDF3209 == \E\A\5\E\6\E\9\6\5\9\A\D\4\8\B\E\8\5\E\A\7\0\4\7\C\D\D\F\3\2\0\9 ]] 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 7ffa490a-b124-4e08-8a41-54d9f6d1f2ac 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
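The NGUID checks above hinge on the `uuid2nguid` conversion: an NVMe NGUID is the 32 hex digits of the `uuidgen` UUID with the dashes stripped (`tr -d -` in nvmf/common.sh@787), compared in uppercase against what `nvme id-ns ... | jq -r .nguid` reports. A minimal sketch of that conversion; the function name mirrors the trace, though the exact implementation in common.sh may differ:

```shell
#!/usr/bin/env bash
# Sketch of uuid2nguid: drop the dashes from a UUID and uppercase the
# hex digits, yielding the NGUID form used in the [[ ... == ... ]] checks.
uuid2nguid() {
    printf '%s\n' "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid 4397c5d1-9d5a-479f-8ca6-6dd145f069c6
# -> 4397C5D19D5A479F8CA66DD145F069C6
```

That output is exactly the value echoed and matched for namespace 1 above; the same transform explains the ns2 and ns3 comparisons.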
00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7ffa490ab1244e088a4154d9f6d1f2ac 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7FFA490AB1244E088A4154D9F6D1F2AC 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 7FFA490AB1244E088A4154D9F6D1F2AC == \7\F\F\A\4\9\0\A\B\1\2\4\4\E\0\8\8\A\4\1\5\4\D\9\F\6\D\1\F\2\A\C ]] 00:28:57.664 18:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:28:57.664 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:57.664 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:28:57.664 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 820714 00:28:57.664 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 820714 ']' 00:28:57.664 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 820714 00:28:57.664 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:57.664 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.664 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 820714 00:28:57.664 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:57.664 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:57.664 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 820714' 00:28:57.664 killing process with pid 820714 00:28:57.664 18:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 820714 00:28:57.664 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 820714 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:58.229 rmmod nvme_tcp 00:28:58.229 rmmod nvme_fabrics 00:28:58.229 rmmod nvme_keyring 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 820683 ']' 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 820683 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 820683 ']' 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 820683 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:58.229 18:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 820683 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 820683' 00:28:58.229 killing process with pid 820683 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 820683 00:28:58.229 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 820683 00:28:58.487 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:58.487 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:58.487 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:58.487 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:28:58.487 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:28:58.487 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:58.487 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:28:58.487 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.487 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:58.487 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.487 18:49:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.487 18:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.396 18:49:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:00.396 00:29:00.396 real 0m8.396s 00:29:00.396 user 0m8.139s 00:29:00.396 sys 0m2.721s 00:29:00.396 18:49:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.396 18:49:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:00.396 ************************************ 00:29:00.396 END TEST nvmf_nsid 00:29:00.396 ************************************ 00:29:00.396 18:49:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:00.396 00:29:00.396 real 18m1.473s 00:29:00.396 user 49m58.164s 00:29:00.396 sys 4m0.519s 00:29:00.396 18:49:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.396 18:49:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:00.396 ************************************ 00:29:00.396 END TEST nvmf_target_extra 00:29:00.396 ************************************ 00:29:00.396 18:49:46 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:00.396 18:49:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:00.396 18:49:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.396 18:49:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:00.656 ************************************ 00:29:00.656 START TEST nvmf_host 00:29:00.656 ************************************ 00:29:00.656 18:49:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:00.656 * Looking for test storage... 
00:29:00.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:00.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.656 --rc genhtml_branch_coverage=1 00:29:00.656 --rc genhtml_function_coverage=1 00:29:00.656 --rc genhtml_legend=1 00:29:00.656 --rc geninfo_all_blocks=1 00:29:00.656 --rc geninfo_unexecuted_blocks=1 00:29:00.656 00:29:00.656 ' 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:00.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.656 --rc genhtml_branch_coverage=1 00:29:00.656 --rc genhtml_function_coverage=1 00:29:00.656 --rc genhtml_legend=1 00:29:00.656 --rc 
geninfo_all_blocks=1 00:29:00.656 --rc geninfo_unexecuted_blocks=1 00:29:00.656 00:29:00.656 ' 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:00.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.656 --rc genhtml_branch_coverage=1 00:29:00.656 --rc genhtml_function_coverage=1 00:29:00.656 --rc genhtml_legend=1 00:29:00.656 --rc geninfo_all_blocks=1 00:29:00.656 --rc geninfo_unexecuted_blocks=1 00:29:00.656 00:29:00.656 ' 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:00.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.656 --rc genhtml_branch_coverage=1 00:29:00.656 --rc genhtml_function_coverage=1 00:29:00.656 --rc genhtml_legend=1 00:29:00.656 --rc geninfo_all_blocks=1 00:29:00.656 --rc geninfo_unexecuted_blocks=1 00:29:00.656 00:29:00.656 ' 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.656 18:49:47 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:00.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.657 ************************************ 00:29:00.657 START TEST nvmf_multicontroller 00:29:00.657 ************************************ 00:29:00.657 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:00.917 * Looking for test storage... 
00:29:00.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:00.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.917 --rc genhtml_branch_coverage=1 00:29:00.917 --rc genhtml_function_coverage=1 
00:29:00.917 --rc genhtml_legend=1 00:29:00.917 --rc geninfo_all_blocks=1 00:29:00.917 --rc geninfo_unexecuted_blocks=1 00:29:00.917 00:29:00.917 ' 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:00.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.917 --rc genhtml_branch_coverage=1 00:29:00.917 --rc genhtml_function_coverage=1 00:29:00.917 --rc genhtml_legend=1 00:29:00.917 --rc geninfo_all_blocks=1 00:29:00.917 --rc geninfo_unexecuted_blocks=1 00:29:00.917 00:29:00.917 ' 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:00.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.917 --rc genhtml_branch_coverage=1 00:29:00.917 --rc genhtml_function_coverage=1 00:29:00.917 --rc genhtml_legend=1 00:29:00.917 --rc geninfo_all_blocks=1 00:29:00.917 --rc geninfo_unexecuted_blocks=1 00:29:00.917 00:29:00.917 ' 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:00.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.917 --rc genhtml_branch_coverage=1 00:29:00.917 --rc genhtml_function_coverage=1 00:29:00.917 --rc genhtml_legend=1 00:29:00.917 --rc geninfo_all_blocks=1 00:29:00.917 --rc geninfo_unexecuted_blocks=1 00:29:00.917 00:29:00.917 ' 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.917 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.918 18:49:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:00.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.918 18:49:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:03.456 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:03.456 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:03.456 18:49:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:03.456 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:03.456 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:03.456 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:03.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:03.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:29:03.456 00:29:03.456 --- 10.0.0.2 ping statistics --- 00:29:03.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.457 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:03.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:03.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:29:03.457 00:29:03.457 --- 10.0.0.1 ping statistics --- 00:29:03.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.457 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=823269 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 823269 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 823269 ']' 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:03.457 18:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.457 [2024-11-17 18:49:49.775925] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:03.457 [2024-11-17 18:49:49.776010] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.457 [2024-11-17 18:49:49.848174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:03.457 [2024-11-17 18:49:49.894415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:03.457 [2024-11-17 18:49:49.894483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:03.457 [2024-11-17 18:49:49.894496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:03.457 [2024-11-17 18:49:49.894506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:03.457 [2024-11-17 18:49:49.894515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:03.457 [2024-11-17 18:49:49.895911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:03.457 [2024-11-17 18:49:49.895990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.457 [2024-11-17 18:49:49.895975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:03.457 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.457 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:03.457 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:03.457 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:03.457 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.457 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:03.457 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:03.457 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.457 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.716 [2024-11-17 18:49:50.033074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.716 Malloc0 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.716 [2024-11-17 
18:49:50.097764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.716 [2024-11-17 18:49:50.105639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.716 Malloc1 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=823290 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 823290 /var/tmp/bdevperf.sock 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 823290 ']' 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:03.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:03.716 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.975 NVMe0n1 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.975 1 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:03.975 18:49:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:03.975 request: 00:29:03.975 { 00:29:03.975 "name": "NVMe0", 00:29:03.975 "trtype": "tcp", 00:29:03.975 "traddr": "10.0.0.2", 00:29:03.975 "adrfam": "ipv4", 00:29:03.975 "trsvcid": "4420", 00:29:03.975 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.975 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:03.975 "hostaddr": "10.0.0.1", 00:29:03.975 "prchk_reftag": false, 00:29:03.975 "prchk_guard": false, 00:29:03.975 "hdgst": false, 00:29:03.975 "ddgst": false, 00:29:03.975 "allow_unrecognized_csi": false, 00:29:03.975 "method": "bdev_nvme_attach_controller", 00:29:03.975 "req_id": 1 00:29:03.975 } 00:29:03.975 Got JSON-RPC error response 00:29:03.975 response: 00:29:03.975 { 00:29:03.975 "code": -114, 00:29:03.975 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:03.975 } 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:03.975 18:49:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:03.975 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:04.234 request: 00:29:04.234 { 00:29:04.234 "name": "NVMe0", 00:29:04.234 "trtype": "tcp", 00:29:04.234 "traddr": "10.0.0.2", 00:29:04.234 "adrfam": "ipv4", 00:29:04.234 "trsvcid": "4420", 00:29:04.234 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:04.234 "hostaddr": "10.0.0.1", 00:29:04.234 "prchk_reftag": false, 00:29:04.234 "prchk_guard": false, 00:29:04.234 "hdgst": false, 00:29:04.234 "ddgst": false, 00:29:04.234 "allow_unrecognized_csi": false, 00:29:04.234 "method": "bdev_nvme_attach_controller", 00:29:04.234 "req_id": 1 00:29:04.234 } 00:29:04.234 Got JSON-RPC error response 00:29:04.234 response: 00:29:04.234 { 00:29:04.234 "code": -114, 00:29:04.234 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:04.234 } 00:29:04.234 18:49:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:04.234 request: 00:29:04.234 { 00:29:04.234 "name": "NVMe0", 00:29:04.234 "trtype": "tcp", 00:29:04.234 "traddr": "10.0.0.2", 00:29:04.234 "adrfam": "ipv4", 00:29:04.234 "trsvcid": "4420", 00:29:04.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:04.234 "hostaddr": "10.0.0.1", 00:29:04.234 "prchk_reftag": false, 00:29:04.234 "prchk_guard": false, 00:29:04.234 "hdgst": false, 00:29:04.234 "ddgst": false, 00:29:04.234 "multipath": "disable", 00:29:04.234 "allow_unrecognized_csi": false, 00:29:04.234 "method": "bdev_nvme_attach_controller", 00:29:04.234 "req_id": 1 00:29:04.234 } 00:29:04.234 Got JSON-RPC error response 00:29:04.234 response: 00:29:04.234 { 00:29:04.234 "code": -114, 00:29:04.234 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:04.234 } 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:04.234 request: 00:29:04.234 { 00:29:04.234 "name": "NVMe0", 00:29:04.234 "trtype": "tcp", 00:29:04.234 "traddr": "10.0.0.2", 00:29:04.234 "adrfam": "ipv4", 00:29:04.234 "trsvcid": "4420", 00:29:04.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:04.234 "hostaddr": "10.0.0.1", 00:29:04.234 "prchk_reftag": false, 00:29:04.234 "prchk_guard": false, 00:29:04.234 "hdgst": false, 00:29:04.234 "ddgst": false, 00:29:04.234 "multipath": "failover", 00:29:04.234 "allow_unrecognized_csi": false, 00:29:04.234 "method": "bdev_nvme_attach_controller", 00:29:04.234 "req_id": 1 00:29:04.234 } 00:29:04.234 Got JSON-RPC error response 00:29:04.234 response: 00:29:04.234 { 00:29:04.234 "code": -114, 00:29:04.234 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:04.234 } 00:29:04.234 18:49:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:04.234 NVMe0n1 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:04.234 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:04.234 18:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:05.608 { 00:29:05.608 "results": [ 00:29:05.608 { 00:29:05.608 "job": "NVMe0n1", 00:29:05.608 "core_mask": "0x1", 00:29:05.608 "workload": "write", 00:29:05.608 "status": "finished", 00:29:05.608 "queue_depth": 128, 00:29:05.608 "io_size": 4096, 00:29:05.608 "runtime": 1.003126, 00:29:05.608 "iops": 18424.405309004054, 00:29:05.608 "mibps": 71.97033323829709, 00:29:05.608 "io_failed": 0, 00:29:05.608 "io_timeout": 0, 00:29:05.608 "avg_latency_us": 6935.924196114738, 00:29:05.608 "min_latency_us": 4174.885925925926, 00:29:05.608 "max_latency_us": 13010.10962962963 00:29:05.608 } 00:29:05.608 ], 00:29:05.608 "core_count": 1 00:29:05.608 } 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 823290 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 823290 ']' 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 823290 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 823290 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 823290' 00:29:05.609 killing process with pid 823290 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 823290 00:29:05.609 18:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 823290 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:05.609 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:05.609 [2024-11-17 18:49:50.206910] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:29:05.609 [2024-11-17 18:49:50.207011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823290 ] 00:29:05.609 [2024-11-17 18:49:50.277882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.609 [2024-11-17 18:49:50.325876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.609 [2024-11-17 18:49:50.735211] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name ab2ebcff-7270-4f93-bb28-8c43311d11d5 already exists 00:29:05.609 [2024-11-17 18:49:50.735262] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:ab2ebcff-7270-4f93-bb28-8c43311d11d5 alias for bdev NVMe1n1 00:29:05.609 [2024-11-17 18:49:50.735292] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:05.609 Running I/O for 1 seconds... 00:29:05.609 18354.00 IOPS, 71.70 MiB/s 00:29:05.609 Latency(us) 00:29:05.609 [2024-11-17T17:49:52.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.609 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:05.609 NVMe0n1 : 1.00 18424.41 71.97 0.00 0.00 6935.92 4174.89 13010.11 00:29:05.609 [2024-11-17T17:49:52.185Z] =================================================================================================================== 00:29:05.609 [2024-11-17T17:49:52.185Z] Total : 18424.41 71.97 0.00 0.00 6935.92 4174.89 13010.11 00:29:05.609 Received shutdown signal, test time was about 1.000000 seconds 00:29:05.609 00:29:05.609 Latency(us) 00:29:05.609 [2024-11-17T17:49:52.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.609 [2024-11-17T17:49:52.185Z] =================================================================================================================== 00:29:05.609 [2024-11-17T17:49:52.185Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:29:05.609 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.609 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:05.609 rmmod nvme_tcp 00:29:05.609 rmmod nvme_fabrics 00:29:05.609 rmmod nvme_keyring 00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 823269 ']' 00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 823269 00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 823269 ']' 00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 823269 
00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 823269 00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 823269' 00:29:05.867 killing process with pid 823269 00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 823269 00:29:05.867 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 823269 00:29:06.126 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:06.126 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:06.126 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:06.126 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:06.126 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:06.126 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:06.126 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:06.126 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:06.126 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:29:06.127 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.127 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.127 18:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.027 18:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:08.027 00:29:08.027 real 0m7.354s 00:29:08.027 user 0m10.658s 00:29:08.027 sys 0m2.358s 00:29:08.027 18:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:08.027 18:49:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:08.027 ************************************ 00:29:08.027 END TEST nvmf_multicontroller 00:29:08.027 ************************************ 00:29:08.027 18:49:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:08.027 18:49:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:08.027 18:49:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:08.027 18:49:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.027 ************************************ 00:29:08.027 START TEST nvmf_aer 00:29:08.027 ************************************ 00:29:08.027 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:08.288 * Looking for test storage... 
00:29:08.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:08.288 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:08.288 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:29:08.288 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:08.288 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:08.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.289 --rc genhtml_branch_coverage=1 00:29:08.289 --rc genhtml_function_coverage=1 00:29:08.289 --rc genhtml_legend=1 00:29:08.289 --rc geninfo_all_blocks=1 00:29:08.289 --rc geninfo_unexecuted_blocks=1 00:29:08.289 00:29:08.289 ' 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:08.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.289 --rc 
genhtml_branch_coverage=1 00:29:08.289 --rc genhtml_function_coverage=1 00:29:08.289 --rc genhtml_legend=1 00:29:08.289 --rc geninfo_all_blocks=1 00:29:08.289 --rc geninfo_unexecuted_blocks=1 00:29:08.289 00:29:08.289 ' 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:08.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.289 --rc genhtml_branch_coverage=1 00:29:08.289 --rc genhtml_function_coverage=1 00:29:08.289 --rc genhtml_legend=1 00:29:08.289 --rc geninfo_all_blocks=1 00:29:08.289 --rc geninfo_unexecuted_blocks=1 00:29:08.289 00:29:08.289 ' 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:08.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.289 --rc genhtml_branch_coverage=1 00:29:08.289 --rc genhtml_function_coverage=1 00:29:08.289 --rc genhtml_legend=1 00:29:08.289 --rc geninfo_all_blocks=1 00:29:08.289 --rc geninfo_unexecuted_blocks=1 00:29:08.289 00:29:08.289 ' 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.289 18:49:54 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:08.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:08.289 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:08.290 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:08.290 18:49:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:10.888 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:10.888 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.888 18:49:56 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:10.888 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:10.888 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.888 18:49:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:10.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:10.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:29:10.888 00:29:10.888 --- 10.0.0.2 ping statistics --- 00:29:10.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.888 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:10.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:29:10.888 00:29:10.888 --- 10.0.0.1 ping statistics --- 00:29:10.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.888 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=825524 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 825524 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 825524 ']' 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.888 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:10.888 [2024-11-17 18:49:57.169309] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:10.888 [2024-11-17 18:49:57.169390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.888 [2024-11-17 18:49:57.245918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:10.888 [2024-11-17 18:49:57.293743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:10.888 [2024-11-17 18:49:57.293797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.888 [2024-11-17 18:49:57.293825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.888 [2024-11-17 18:49:57.293836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.888 [2024-11-17 18:49:57.293846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.888 [2024-11-17 18:49:57.295347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.888 [2024-11-17 18:49:57.295427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.888 [2024-11-17 18:49:57.295448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:10.888 [2024-11-17 18:49:57.295453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.889 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.889 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:10.889 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:10.889 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:10.889 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:10.889 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.889 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:10.889 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.889 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:10.889 [2024-11-17 18:49:57.439469] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.889 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.889 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:10.889 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.889 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.146 Malloc0 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.146 [2024-11-17 18:49:57.500552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.146 [ 00:29:11.146 { 00:29:11.146 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:11.146 "subtype": "Discovery", 00:29:11.146 "listen_addresses": [], 00:29:11.146 "allow_any_host": true, 00:29:11.146 "hosts": [] 00:29:11.146 }, 00:29:11.146 { 00:29:11.146 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.146 "subtype": "NVMe", 00:29:11.146 "listen_addresses": [ 00:29:11.146 { 00:29:11.146 "trtype": "TCP", 00:29:11.146 "adrfam": "IPv4", 00:29:11.146 "traddr": "10.0.0.2", 00:29:11.146 "trsvcid": "4420" 00:29:11.146 } 00:29:11.146 ], 00:29:11.146 "allow_any_host": true, 00:29:11.146 "hosts": [], 00:29:11.146 "serial_number": "SPDK00000000000001", 00:29:11.146 "model_number": "SPDK bdev Controller", 00:29:11.146 "max_namespaces": 2, 00:29:11.146 "min_cntlid": 1, 00:29:11.146 "max_cntlid": 65519, 00:29:11.146 "namespaces": [ 00:29:11.146 { 00:29:11.146 "nsid": 1, 00:29:11.146 "bdev_name": "Malloc0", 00:29:11.146 "name": "Malloc0", 00:29:11.146 "nguid": "CC88A407502B47889A96DAAB6A19D1C8", 00:29:11.146 "uuid": "cc88a407-502b-4788-9a96-daab6a19d1c8" 00:29:11.146 } 00:29:11.146 ] 00:29:11.146 } 00:29:11.146 ] 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=825554 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:11.146 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:11.147 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:11.147 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:11.147 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:11.147 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:11.147 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:11.147 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:11.147 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:11.147 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.405 Malloc1 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.405 Asynchronous Event Request test 00:29:11.405 Attaching to 10.0.0.2 00:29:11.405 Attached to 10.0.0.2 00:29:11.405 Registering asynchronous event callbacks... 00:29:11.405 Starting namespace attribute notice tests for all controllers... 00:29:11.405 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:11.405 aer_cb - Changed Namespace 00:29:11.405 Cleaning up... 
00:29:11.405 [ 00:29:11.405 { 00:29:11.405 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:11.405 "subtype": "Discovery", 00:29:11.405 "listen_addresses": [], 00:29:11.405 "allow_any_host": true, 00:29:11.405 "hosts": [] 00:29:11.405 }, 00:29:11.405 { 00:29:11.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.405 "subtype": "NVMe", 00:29:11.405 "listen_addresses": [ 00:29:11.405 { 00:29:11.405 "trtype": "TCP", 00:29:11.405 "adrfam": "IPv4", 00:29:11.405 "traddr": "10.0.0.2", 00:29:11.405 "trsvcid": "4420" 00:29:11.405 } 00:29:11.405 ], 00:29:11.405 "allow_any_host": true, 00:29:11.405 "hosts": [], 00:29:11.405 "serial_number": "SPDK00000000000001", 00:29:11.405 "model_number": "SPDK bdev Controller", 00:29:11.405 "max_namespaces": 2, 00:29:11.405 "min_cntlid": 1, 00:29:11.405 "max_cntlid": 65519, 00:29:11.405 "namespaces": [ 00:29:11.405 { 00:29:11.405 "nsid": 1, 00:29:11.405 "bdev_name": "Malloc0", 00:29:11.405 "name": "Malloc0", 00:29:11.405 "nguid": "CC88A407502B47889A96DAAB6A19D1C8", 00:29:11.405 "uuid": "cc88a407-502b-4788-9a96-daab6a19d1c8" 00:29:11.405 }, 00:29:11.405 { 00:29:11.405 "nsid": 2, 00:29:11.405 "bdev_name": "Malloc1", 00:29:11.405 "name": "Malloc1", 00:29:11.405 "nguid": "C74B05C4DE4944C2ADA7B3A562C14E27", 00:29:11.405 "uuid": "c74b05c4-de49-44c2-ada7-b3a562c14e27" 00:29:11.405 } 00:29:11.405 ] 00:29:11.405 } 00:29:11.405 ] 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 825554 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.405 18:49:57 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:11.405 rmmod nvme_tcp 00:29:11.405 rmmod nvme_fabrics 00:29:11.405 rmmod nvme_keyring 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
825524 ']' 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 825524 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 825524 ']' 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 825524 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 825524 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 825524' 00:29:11.405 killing process with pid 825524 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 825524 00:29:11.405 18:49:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 825524 00:29:11.665 18:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:11.665 18:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:11.665 18:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:11.665 18:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:11.665 18:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:11.665 18:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:11.665 18:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:11.665 18:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:11.665 18:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:11.665 18:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.665 18:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.665 18:49:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.202 00:29:14.202 real 0m5.611s 00:29:14.202 user 0m4.321s 00:29:14.202 sys 0m2.060s 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:14.202 ************************************ 00:29:14.202 END TEST nvmf_aer 00:29:14.202 ************************************ 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.202 ************************************ 00:29:14.202 START TEST nvmf_async_init 00:29:14.202 ************************************ 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:14.202 * Looking for test storage... 
00:29:14.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:14.202 18:50:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:14.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.202 --rc genhtml_branch_coverage=1 00:29:14.202 --rc genhtml_function_coverage=1 00:29:14.202 --rc genhtml_legend=1 00:29:14.202 --rc geninfo_all_blocks=1 00:29:14.202 --rc geninfo_unexecuted_blocks=1 00:29:14.202 
00:29:14.202 ' 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:14.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.202 --rc genhtml_branch_coverage=1 00:29:14.202 --rc genhtml_function_coverage=1 00:29:14.202 --rc genhtml_legend=1 00:29:14.202 --rc geninfo_all_blocks=1 00:29:14.202 --rc geninfo_unexecuted_blocks=1 00:29:14.202 00:29:14.202 ' 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:14.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.202 --rc genhtml_branch_coverage=1 00:29:14.202 --rc genhtml_function_coverage=1 00:29:14.202 --rc genhtml_legend=1 00:29:14.202 --rc geninfo_all_blocks=1 00:29:14.202 --rc geninfo_unexecuted_blocks=1 00:29:14.202 00:29:14.202 ' 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:14.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.202 --rc genhtml_branch_coverage=1 00:29:14.202 --rc genhtml_function_coverage=1 00:29:14.202 --rc genhtml_legend=1 00:29:14.202 --rc geninfo_all_blocks=1 00:29:14.202 --rc geninfo_unexecuted_blocks=1 00:29:14.202 00:29:14.202 ' 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.202 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:14.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=21a3cd7a5f2a44cc94defafd4a4f55d3 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.203 18:50:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.104 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.104 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:16.104 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:16.104 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:16.104 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:16.104 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:16.105 18:50:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:16.105 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:16.105 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:16.105 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:16.105 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:16.105 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:16.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:16.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:29:16.364 00:29:16.364 --- 10.0.0.2 ping statistics --- 00:29:16.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.364 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:16.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:16.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:29:16.364 00:29:16.364 --- 10.0.0.1 ping statistics --- 00:29:16.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.364 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=827610 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 827610 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 827610 ']' 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.364 18:50:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.364 [2024-11-17 18:50:02.804805] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:29:16.364 [2024-11-17 18:50:02.804899] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.364 [2024-11-17 18:50:02.878588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.364 [2024-11-17 18:50:02.926402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.364 [2024-11-17 18:50:02.926458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.364 [2024-11-17 18:50:02.926487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.364 [2024-11-17 18:50:02.926498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.364 [2024-11-17 18:50:02.926508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
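Stripped of the xtrace framing, the network plumbing captured earlier in this trace (nvmf_tcp_init moving one port of the dual-port NIC into a private namespace, then verifying reachability) follows a standard iproute2 pattern. A minimal sketch, assuming root privileges and the cvl_0_0/cvl_0_1 interface names seen in this run; this is a recap of the logged commands, not the harness itself:

```shell
# Sketch of the namespace topology built by nvmf_tcp_init above (assumes root
# and the cvl_0_0 / cvl_0_1 interface names from this run).
ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-facing port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2                            # root ns -> target ns reachability check
```

The nvmf_tgt process is then launched under `ip netns exec cvl_0_0_ns_spdk`, so only its listener lives behind the namespace boundary while the initiator side stays in the root namespace.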
00:29:16.364 [2024-11-17 18:50:02.927125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.623 [2024-11-17 18:50:03.058921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.623 null0 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 21a3cd7a5f2a44cc94defafd4a4f55d3 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.623 [2024-11-17 18:50:03.099186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.623 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.881 nvme0n1 00:29:16.881 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.881 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:16.881 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.881 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.881 [ 00:29:16.881 { 00:29:16.881 "name": "nvme0n1", 00:29:16.881 "aliases": [ 00:29:16.881 "21a3cd7a-5f2a-44cc-94de-fafd4a4f55d3" 00:29:16.881 ], 00:29:16.881 "product_name": "NVMe disk", 00:29:16.881 "block_size": 512, 00:29:16.881 "num_blocks": 2097152, 00:29:16.881 "uuid": "21a3cd7a-5f2a-44cc-94de-fafd4a4f55d3", 00:29:16.881 "numa_id": 0, 00:29:16.881 "assigned_rate_limits": { 00:29:16.881 "rw_ios_per_sec": 0, 00:29:16.881 "rw_mbytes_per_sec": 0, 00:29:16.881 "r_mbytes_per_sec": 0, 00:29:16.881 "w_mbytes_per_sec": 0 00:29:16.881 }, 00:29:16.881 "claimed": false, 00:29:16.881 "zoned": false, 00:29:16.881 "supported_io_types": { 00:29:16.881 "read": true, 00:29:16.881 "write": true, 00:29:16.881 "unmap": false, 00:29:16.881 "flush": true, 00:29:16.881 "reset": true, 00:29:16.881 "nvme_admin": true, 00:29:16.881 "nvme_io": true, 00:29:16.881 "nvme_io_md": false, 00:29:16.881 "write_zeroes": true, 00:29:16.881 "zcopy": false, 00:29:16.881 "get_zone_info": false, 00:29:16.881 "zone_management": false, 00:29:16.881 "zone_append": false, 00:29:16.881 "compare": true, 00:29:16.881 "compare_and_write": true, 00:29:16.881 "abort": true, 00:29:16.881 "seek_hole": false, 00:29:16.881 "seek_data": false, 00:29:16.881 "copy": true, 00:29:16.881 
"nvme_iov_md": false 00:29:16.881 }, 00:29:16.881 "memory_domains": [ 00:29:16.881 { 00:29:16.881 "dma_device_id": "system", 00:29:16.881 "dma_device_type": 1 00:29:16.881 } 00:29:16.881 ], 00:29:16.881 "driver_specific": { 00:29:16.881 "nvme": [ 00:29:16.881 { 00:29:16.881 "trid": { 00:29:16.881 "trtype": "TCP", 00:29:16.881 "adrfam": "IPv4", 00:29:16.881 "traddr": "10.0.0.2", 00:29:16.881 "trsvcid": "4420", 00:29:16.881 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:16.881 }, 00:29:16.881 "ctrlr_data": { 00:29:16.881 "cntlid": 1, 00:29:16.881 "vendor_id": "0x8086", 00:29:16.881 "model_number": "SPDK bdev Controller", 00:29:16.881 "serial_number": "00000000000000000000", 00:29:16.881 "firmware_revision": "25.01", 00:29:16.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:16.881 "oacs": { 00:29:16.881 "security": 0, 00:29:16.881 "format": 0, 00:29:16.881 "firmware": 0, 00:29:16.881 "ns_manage": 0 00:29:16.881 }, 00:29:16.881 "multi_ctrlr": true, 00:29:16.881 "ana_reporting": false 00:29:16.881 }, 00:29:16.881 "vs": { 00:29:16.881 "nvme_version": "1.3" 00:29:16.881 }, 00:29:16.881 "ns_data": { 00:29:16.881 "id": 1, 00:29:16.881 "can_share": true 00:29:16.881 } 00:29:16.881 } 00:29:16.881 ], 00:29:16.881 "mp_policy": "active_passive" 00:29:16.881 } 00:29:16.881 } 00:29:16.881 ] 00:29:16.881 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.881 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:16.881 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.881 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:16.882 [2024-11-17 18:50:03.348541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:16.882 [2024-11-17 18:50:03.348630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1eb84a0 (9): Bad file descriptor 00:29:17.140 [2024-11-17 18:50:03.480797] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.140 [ 00:29:17.140 { 00:29:17.140 "name": "nvme0n1", 00:29:17.140 "aliases": [ 00:29:17.140 "21a3cd7a-5f2a-44cc-94de-fafd4a4f55d3" 00:29:17.140 ], 00:29:17.140 "product_name": "NVMe disk", 00:29:17.140 "block_size": 512, 00:29:17.140 "num_blocks": 2097152, 00:29:17.140 "uuid": "21a3cd7a-5f2a-44cc-94de-fafd4a4f55d3", 00:29:17.140 "numa_id": 0, 00:29:17.140 "assigned_rate_limits": { 00:29:17.140 "rw_ios_per_sec": 0, 00:29:17.140 "rw_mbytes_per_sec": 0, 00:29:17.140 "r_mbytes_per_sec": 0, 00:29:17.140 "w_mbytes_per_sec": 0 00:29:17.140 }, 00:29:17.140 "claimed": false, 00:29:17.140 "zoned": false, 00:29:17.140 "supported_io_types": { 00:29:17.140 "read": true, 00:29:17.140 "write": true, 00:29:17.140 "unmap": false, 00:29:17.140 "flush": true, 00:29:17.140 "reset": true, 00:29:17.140 "nvme_admin": true, 00:29:17.140 "nvme_io": true, 00:29:17.140 "nvme_io_md": false, 00:29:17.140 "write_zeroes": true, 00:29:17.140 "zcopy": false, 00:29:17.140 "get_zone_info": false, 00:29:17.140 "zone_management": false, 00:29:17.140 "zone_append": false, 00:29:17.140 "compare": true, 00:29:17.140 "compare_and_write": true, 00:29:17.140 "abort": true, 00:29:17.140 "seek_hole": false, 00:29:17.140 "seek_data": false, 00:29:17.140 "copy": true, 00:29:17.140 "nvme_iov_md": false 00:29:17.140 }, 00:29:17.140 "memory_domains": [ 
00:29:17.140 { 00:29:17.140 "dma_device_id": "system", 00:29:17.140 "dma_device_type": 1 00:29:17.140 } 00:29:17.140 ], 00:29:17.140 "driver_specific": { 00:29:17.140 "nvme": [ 00:29:17.140 { 00:29:17.140 "trid": { 00:29:17.140 "trtype": "TCP", 00:29:17.140 "adrfam": "IPv4", 00:29:17.140 "traddr": "10.0.0.2", 00:29:17.140 "trsvcid": "4420", 00:29:17.140 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:17.140 }, 00:29:17.140 "ctrlr_data": { 00:29:17.140 "cntlid": 2, 00:29:17.140 "vendor_id": "0x8086", 00:29:17.140 "model_number": "SPDK bdev Controller", 00:29:17.140 "serial_number": "00000000000000000000", 00:29:17.140 "firmware_revision": "25.01", 00:29:17.140 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:17.140 "oacs": { 00:29:17.140 "security": 0, 00:29:17.140 "format": 0, 00:29:17.140 "firmware": 0, 00:29:17.140 "ns_manage": 0 00:29:17.140 }, 00:29:17.140 "multi_ctrlr": true, 00:29:17.140 "ana_reporting": false 00:29:17.140 }, 00:29:17.140 "vs": { 00:29:17.140 "nvme_version": "1.3" 00:29:17.140 }, 00:29:17.140 "ns_data": { 00:29:17.140 "id": 1, 00:29:17.140 "can_share": true 00:29:17.140 } 00:29:17.140 } 00:29:17.140 ], 00:29:17.140 "mp_policy": "active_passive" 00:29:17.140 } 00:29:17.140 } 00:29:17.140 ] 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6FuzlsejfP 
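Stripped of the rpc_cmd wrappers, the async_init provisioning steps traced above reduce to the following rpc.py sequence. The script path and the NGUID are taken from this run and are illustrative only:

```shell
# Condensed from the host/async_init.sh steps logged above (rpc.py path is
# assumed relative to the SPDK repo; the NGUID is this run's example value).
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o                       # TCP transport with optimizations
$rpc bdev_null_create null0 1024 512                       # 1024 MiB null bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # subsystem, allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
    -g 21a3cd7a5f2a44cc94defafd4a4f55d3                    # namespace with explicit NGUID
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0                          # host side: nvme0n1 appears
```

The bdev_get_bdevs dumps above confirm the result: 2097152 blocks of 512 bytes (the 1024 MiB null bdev) and a uuid matching the NGUID with dashes restored, with cntlid advancing from 1 to 2 across the controller reset.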
00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6FuzlsejfP 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.6FuzlsejfP 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.140 [2024-11-17 18:50:03.533137] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:17.140 [2024-11-17 18:50:03.533257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
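The trace above (async_init.sh steps 53-58) walks through the TLS PSK provisioning for the secure-channel listener. A condensed sketch of those steps follows; the `rpc.py` invocations are left as comments because they require a running SPDK nvmf target, and the temp path differs per run. The key value is the one visible in the log; the `scripts/rpc.py` paths are assumptions based on a standard SPDK checkout.

```shell
# Sketch of the TLS PSK setup exercised by async_init.sh above.
# Assumes a live SPDK nvmf target for the commented rpc.py calls.
key_path=$(mktemp)

# NVMe/TCP PSK interchange format: NVMeTLSkey-1:<hash id>:<base64 key+CRC>:
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"   # keyring_file_add_key rejects group/world-readable keys

# scripts/rpc.py keyring_file_add_key key0 "$key_path"
# scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
# scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
#     -t tcp -a 10.0.0.2 -s 4421 --secure-channel
# scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
#     nqn.2016-06.io.spdk:host1 --psk key0

grep -q '^NVMeTLSkey-1:' "$key_path" && echo "PSK file ready: $key_path"
rm -f "$key_path"
```

The controller attach at step 66 then passes the same `--psk key0` to `bdev_nvme_attach_controller` against port 4421, which is why the second bdev dump reports `"trsvcid": "4421"` and `"cntlid": 3`.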
00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.140 [2024-11-17 18:50:03.549181] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:17.140 nvme0n1 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.140 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.140 [ 00:29:17.140 { 00:29:17.140 "name": "nvme0n1", 00:29:17.140 "aliases": [ 00:29:17.140 "21a3cd7a-5f2a-44cc-94de-fafd4a4f55d3" 00:29:17.140 ], 00:29:17.140 "product_name": "NVMe disk", 00:29:17.140 "block_size": 512, 00:29:17.140 "num_blocks": 2097152, 00:29:17.140 "uuid": "21a3cd7a-5f2a-44cc-94de-fafd4a4f55d3", 00:29:17.140 "numa_id": 0, 00:29:17.140 "assigned_rate_limits": { 00:29:17.140 "rw_ios_per_sec": 0, 00:29:17.140 
"rw_mbytes_per_sec": 0, 00:29:17.140 "r_mbytes_per_sec": 0, 00:29:17.140 "w_mbytes_per_sec": 0 00:29:17.140 }, 00:29:17.140 "claimed": false, 00:29:17.140 "zoned": false, 00:29:17.140 "supported_io_types": { 00:29:17.140 "read": true, 00:29:17.140 "write": true, 00:29:17.140 "unmap": false, 00:29:17.140 "flush": true, 00:29:17.140 "reset": true, 00:29:17.140 "nvme_admin": true, 00:29:17.140 "nvme_io": true, 00:29:17.140 "nvme_io_md": false, 00:29:17.140 "write_zeroes": true, 00:29:17.140 "zcopy": false, 00:29:17.140 "get_zone_info": false, 00:29:17.140 "zone_management": false, 00:29:17.140 "zone_append": false, 00:29:17.140 "compare": true, 00:29:17.140 "compare_and_write": true, 00:29:17.141 "abort": true, 00:29:17.141 "seek_hole": false, 00:29:17.141 "seek_data": false, 00:29:17.141 "copy": true, 00:29:17.141 "nvme_iov_md": false 00:29:17.141 }, 00:29:17.141 "memory_domains": [ 00:29:17.141 { 00:29:17.141 "dma_device_id": "system", 00:29:17.141 "dma_device_type": 1 00:29:17.141 } 00:29:17.141 ], 00:29:17.141 "driver_specific": { 00:29:17.141 "nvme": [ 00:29:17.141 { 00:29:17.141 "trid": { 00:29:17.141 "trtype": "TCP", 00:29:17.141 "adrfam": "IPv4", 00:29:17.141 "traddr": "10.0.0.2", 00:29:17.141 "trsvcid": "4421", 00:29:17.141 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:17.141 }, 00:29:17.141 "ctrlr_data": { 00:29:17.141 "cntlid": 3, 00:29:17.141 "vendor_id": "0x8086", 00:29:17.141 "model_number": "SPDK bdev Controller", 00:29:17.141 "serial_number": "00000000000000000000", 00:29:17.141 "firmware_revision": "25.01", 00:29:17.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:17.141 "oacs": { 00:29:17.141 "security": 0, 00:29:17.141 "format": 0, 00:29:17.141 "firmware": 0, 00:29:17.141 "ns_manage": 0 00:29:17.141 }, 00:29:17.141 "multi_ctrlr": true, 00:29:17.141 "ana_reporting": false 00:29:17.141 }, 00:29:17.141 "vs": { 00:29:17.141 "nvme_version": "1.3" 00:29:17.141 }, 00:29:17.141 "ns_data": { 00:29:17.141 "id": 1, 00:29:17.141 "can_share": true 00:29:17.141 } 
00:29:17.141 } 00:29:17.141 ], 00:29:17.141 "mp_policy": "active_passive" 00:29:17.141 } 00:29:17.141 } 00:29:17.141 ] 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.6FuzlsejfP 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:17.141 rmmod nvme_tcp 00:29:17.141 rmmod nvme_fabrics 00:29:17.141 rmmod nvme_keyring 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:17.141 18:50:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 827610 ']' 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 827610 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 827610 ']' 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 827610 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.141 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 827610 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 827610' 00:29:17.400 killing process with pid 827610 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 827610 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 827610 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:17.400 18:50:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.400 18:50:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.938 18:50:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:19.938 00:29:19.938 real 0m5.659s 00:29:19.938 user 0m2.048s 00:29:19.938 sys 0m1.909s 00:29:19.938 18:50:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.938 18:50:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:19.938 ************************************ 00:29:19.938 END TEST nvmf_async_init 00:29:19.938 ************************************ 00:29:19.938 18:50:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:19.938 18:50:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:19.938 18:50:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.938 18:50:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.938 ************************************ 00:29:19.938 START TEST dma 00:29:19.938 ************************************ 00:29:19.938 18:50:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:19.938 * 
Looking for test storage... 00:29:19.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:19.938 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:19.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.939 --rc genhtml_branch_coverage=1 00:29:19.939 --rc genhtml_function_coverage=1 00:29:19.939 --rc genhtml_legend=1 00:29:19.939 --rc geninfo_all_blocks=1 00:29:19.939 --rc geninfo_unexecuted_blocks=1 00:29:19.939 00:29:19.939 ' 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:19.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.939 --rc genhtml_branch_coverage=1 00:29:19.939 --rc genhtml_function_coverage=1 
00:29:19.939 --rc genhtml_legend=1 00:29:19.939 --rc geninfo_all_blocks=1 00:29:19.939 --rc geninfo_unexecuted_blocks=1 00:29:19.939 00:29:19.939 ' 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:19.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.939 --rc genhtml_branch_coverage=1 00:29:19.939 --rc genhtml_function_coverage=1 00:29:19.939 --rc genhtml_legend=1 00:29:19.939 --rc geninfo_all_blocks=1 00:29:19.939 --rc geninfo_unexecuted_blocks=1 00:29:19.939 00:29:19.939 ' 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:19.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.939 --rc genhtml_branch_coverage=1 00:29:19.939 --rc genhtml_function_coverage=1 00:29:19.939 --rc genhtml_legend=1 00:29:19.939 --rc geninfo_all_blocks=1 00:29:19.939 --rc geninfo_unexecuted_blocks=1 00:29:19.939 00:29:19.939 ' 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:19.939 
18:50:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:19.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:19.939 00:29:19.939 real 0m0.171s 00:29:19.939 user 0m0.128s 00:29:19.939 sys 0m0.053s 00:29:19.939 18:50:06 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:19.939 ************************************ 00:29:19.939 END TEST dma 00:29:19.939 ************************************ 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.939 ************************************ 00:29:19.939 START TEST nvmf_identify 00:29:19.939 ************************************ 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:19.939 * Looking for test storage... 
00:29:19.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:19.939 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:19.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.940 --rc genhtml_branch_coverage=1 00:29:19.940 --rc genhtml_function_coverage=1 00:29:19.940 --rc genhtml_legend=1 00:29:19.940 --rc geninfo_all_blocks=1 00:29:19.940 --rc geninfo_unexecuted_blocks=1 00:29:19.940 00:29:19.940 ' 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:29:19.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.940 --rc genhtml_branch_coverage=1 00:29:19.940 --rc genhtml_function_coverage=1 00:29:19.940 --rc genhtml_legend=1 00:29:19.940 --rc geninfo_all_blocks=1 00:29:19.940 --rc geninfo_unexecuted_blocks=1 00:29:19.940 00:29:19.940 ' 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:19.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.940 --rc genhtml_branch_coverage=1 00:29:19.940 --rc genhtml_function_coverage=1 00:29:19.940 --rc genhtml_legend=1 00:29:19.940 --rc geninfo_all_blocks=1 00:29:19.940 --rc geninfo_unexecuted_blocks=1 00:29:19.940 00:29:19.940 ' 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:19.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.940 --rc genhtml_branch_coverage=1 00:29:19.940 --rc genhtml_function_coverage=1 00:29:19.940 --rc genhtml_legend=1 00:29:19.940 --rc geninfo_all_blocks=1 00:29:19.940 --rc geninfo_unexecuted_blocks=1 00:29:19.940 00:29:19.940 ' 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:19.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:19.940 18:50:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:22.474 18:50:08 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:22.474 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.474 
18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:22.474 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.474 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:22.475 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:22.475 18:50:08 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:22.475 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:22.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:29:22.475 00:29:22.475 --- 10.0.0.2 ping statistics --- 00:29:22.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.475 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:22.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:22.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:29:22.475 00:29:22.475 --- 10.0.0.1 ping statistics --- 00:29:22.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.475 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=829756 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 829756 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 829756 ']' 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.475 18:50:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.475 [2024-11-17 18:50:08.806255] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:22.475 [2024-11-17 18:50:08.806348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.475 [2024-11-17 18:50:08.878289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:22.475 [2024-11-17 18:50:08.924490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.475 [2024-11-17 18:50:08.924552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.475 [2024-11-17 18:50:08.924579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.475 [2024-11-17 18:50:08.924591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.475 [2024-11-17 18:50:08.924600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:22.475 [2024-11-17 18:50:08.926084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.475 [2024-11-17 18:50:08.926142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.475 [2024-11-17 18:50:08.926205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:22.475 [2024-11-17 18:50:08.926208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.475 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.475 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:22.475 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:22.475 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.475 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.475 [2024-11-17 18:50:09.041380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.735 Malloc0 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.735 18:50:09 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.735 [2024-11-17 18:50:09.125249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.735 18:50:09 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:22.735 [ 00:29:22.735 { 00:29:22.735 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:22.735 "subtype": "Discovery", 00:29:22.735 "listen_addresses": [ 00:29:22.735 { 00:29:22.735 "trtype": "TCP", 00:29:22.735 "adrfam": "IPv4", 00:29:22.735 "traddr": "10.0.0.2", 00:29:22.735 "trsvcid": "4420" 00:29:22.735 } 00:29:22.735 ], 00:29:22.735 "allow_any_host": true, 00:29:22.735 "hosts": [] 00:29:22.735 }, 00:29:22.735 { 00:29:22.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.735 "subtype": "NVMe", 00:29:22.735 "listen_addresses": [ 00:29:22.735 { 00:29:22.735 "trtype": "TCP", 00:29:22.735 "adrfam": "IPv4", 00:29:22.735 "traddr": "10.0.0.2", 00:29:22.735 "trsvcid": "4420" 00:29:22.735 } 00:29:22.735 ], 00:29:22.735 "allow_any_host": true, 00:29:22.735 "hosts": [], 00:29:22.735 "serial_number": "SPDK00000000000001", 00:29:22.735 "model_number": "SPDK bdev Controller", 00:29:22.735 "max_namespaces": 32, 00:29:22.735 "min_cntlid": 1, 00:29:22.735 "max_cntlid": 65519, 00:29:22.735 "namespaces": [ 00:29:22.735 { 00:29:22.735 "nsid": 1, 00:29:22.735 "bdev_name": "Malloc0", 00:29:22.735 "name": "Malloc0", 00:29:22.735 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:22.735 "eui64": "ABCDEF0123456789", 00:29:22.735 "uuid": "ef7ad289-d262-41a8-8a59-0847bca165a5" 00:29:22.735 } 00:29:22.735 ] 00:29:22.735 } 00:29:22.735 ] 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.735 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:22.735 [2024-11-17 18:50:09.164084] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:22.735 [2024-11-17 18:50:09.164124] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid829903 ] 00:29:22.735 [2024-11-17 18:50:09.214900] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:22.735 [2024-11-17 18:50:09.214965] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:22.735 [2024-11-17 18:50:09.214977] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:22.735 [2024-11-17 18:50:09.215005] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:22.735 [2024-11-17 18:50:09.215020] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:22.735 [2024-11-17 18:50:09.219111] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:22.735 [2024-11-17 18:50:09.219179] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c94d80 0 00:29:22.735 [2024-11-17 18:50:09.219386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:22.735 [2024-11-17 18:50:09.219404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:22.735 [2024-11-17 18:50:09.219412] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:22.736 [2024-11-17 18:50:09.219419] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:22.736 [2024-11-17 18:50:09.219457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.219470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.219477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c94d80) 00:29:22.736 [2024-11-17 18:50:09.219498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:22.736 [2024-11-17 18:50:09.219525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00480, cid 0, qid 0 00:29:22.736 [2024-11-17 18:50:09.226702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.736 [2024-11-17 18:50:09.226720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.736 [2024-11-17 18:50:09.226728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.226735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00480) on tqpair=0x1c94d80 00:29:22.736 [2024-11-17 18:50:09.226751] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:22.736 [2024-11-17 18:50:09.226762] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:22.736 [2024-11-17 18:50:09.226772] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:22.736 [2024-11-17 18:50:09.226793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.226802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.226809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c94d80) 
00:29:22.736 [2024-11-17 18:50:09.226820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.736 [2024-11-17 18:50:09.226844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00480, cid 0, qid 0 00:29:22.736 [2024-11-17 18:50:09.226994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.736 [2024-11-17 18:50:09.227008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.736 [2024-11-17 18:50:09.227015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.227022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00480) on tqpair=0x1c94d80 00:29:22.736 [2024-11-17 18:50:09.227031] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:22.736 [2024-11-17 18:50:09.227044] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:22.736 [2024-11-17 18:50:09.227057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.227065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.227072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c94d80) 00:29:22.736 [2024-11-17 18:50:09.227082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.736 [2024-11-17 18:50:09.227104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00480, cid 0, qid 0 00:29:22.736 [2024-11-17 18:50:09.227181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.736 [2024-11-17 18:50:09.227195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:22.736 [2024-11-17 18:50:09.227202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.227209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00480) on tqpair=0x1c94d80 00:29:22.736 [2024-11-17 18:50:09.227219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:22.736 [2024-11-17 18:50:09.227233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:22.736 [2024-11-17 18:50:09.227245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.227253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.227259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c94d80) 00:29:22.736 [2024-11-17 18:50:09.227274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.736 [2024-11-17 18:50:09.227297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00480, cid 0, qid 0 00:29:22.736 [2024-11-17 18:50:09.227374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.736 [2024-11-17 18:50:09.227388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.736 [2024-11-17 18:50:09.227394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.227401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00480) on tqpair=0x1c94d80 00:29:22.736 [2024-11-17 18:50:09.227410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:22.736 [2024-11-17 18:50:09.227426] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.227436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.227442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c94d80) 00:29:22.736 [2024-11-17 18:50:09.227452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.736 [2024-11-17 18:50:09.227473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00480, cid 0, qid 0 00:29:22.736 [2024-11-17 18:50:09.227574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.736 [2024-11-17 18:50:09.227588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.736 [2024-11-17 18:50:09.227595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.227602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00480) on tqpair=0x1c94d80 00:29:22.736 [2024-11-17 18:50:09.227610] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:22.736 [2024-11-17 18:50:09.227619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:22.736 [2024-11-17 18:50:09.227632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:22.736 [2024-11-17 18:50:09.227742] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:22.736 [2024-11-17 18:50:09.227753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:29:22.736 [2024-11-17 18:50:09.227767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.227775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.227782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c94d80) 00:29:22.736 [2024-11-17 18:50:09.227792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.736 [2024-11-17 18:50:09.227813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00480, cid 0, qid 0 00:29:22.736 [2024-11-17 18:50:09.227945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.736 [2024-11-17 18:50:09.227958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.736 [2024-11-17 18:50:09.227966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.227972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00480) on tqpair=0x1c94d80 00:29:22.736 [2024-11-17 18:50:09.227981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:22.736 [2024-11-17 18:50:09.227997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.228007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.228020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c94d80) 00:29:22.736 [2024-11-17 18:50:09.228031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.736 [2024-11-17 18:50:09.228052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00480, cid 0, qid 0 00:29:22.736 [2024-11-17 
18:50:09.228125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.736 [2024-11-17 18:50:09.228139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.736 [2024-11-17 18:50:09.228146] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.228152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00480) on tqpair=0x1c94d80 00:29:22.736 [2024-11-17 18:50:09.228160] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:22.736 [2024-11-17 18:50:09.228168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:22.736 [2024-11-17 18:50:09.228182] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:22.736 [2024-11-17 18:50:09.228197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:22.736 [2024-11-17 18:50:09.228212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.228220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c94d80) 00:29:22.736 [2024-11-17 18:50:09.228231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.736 [2024-11-17 18:50:09.228252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00480, cid 0, qid 0 00:29:22.736 [2024-11-17 18:50:09.228392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:22.736 [2024-11-17 18:50:09.228406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:29:22.736 [2024-11-17 18:50:09.228414] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.228421] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c94d80): datao=0, datal=4096, cccid=0 00:29:22.736 [2024-11-17 18:50:09.228429] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d00480) on tqpair(0x1c94d80): expected_datao=0, payload_size=4096 00:29:22.736 [2024-11-17 18:50:09.228436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.228447] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.228456] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.228475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.736 [2024-11-17 18:50:09.228487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.736 [2024-11-17 18:50:09.228494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.736 [2024-11-17 18:50:09.228500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00480) on tqpair=0x1c94d80 00:29:22.736 [2024-11-17 18:50:09.228512] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:22.737 [2024-11-17 18:50:09.228521] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:22.737 [2024-11-17 18:50:09.228529] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:22.737 [2024-11-17 18:50:09.228543] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:22.737 [2024-11-17 18:50:09.228552] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:22.737 [2024-11-17 18:50:09.228564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:22.737 [2024-11-17 18:50:09.228587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:22.737 [2024-11-17 18:50:09.228601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.228609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.228615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c94d80) 00:29:22.737 [2024-11-17 18:50:09.228626] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:22.737 [2024-11-17 18:50:09.228648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00480, cid 0, qid 0 00:29:22.737 [2024-11-17 18:50:09.228777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.737 [2024-11-17 18:50:09.228792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.737 [2024-11-17 18:50:09.228799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.228806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00480) on tqpair=0x1c94d80 00:29:22.737 [2024-11-17 18:50:09.228817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.228825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.228831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c94d80) 00:29:22.737 [2024-11-17 18:50:09.228841] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.737 [2024-11-17 18:50:09.228851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.228858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.228865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c94d80) 00:29:22.737 [2024-11-17 18:50:09.228874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.737 [2024-11-17 18:50:09.228883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.228890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.228897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c94d80) 00:29:22.737 [2024-11-17 18:50:09.228905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.737 [2024-11-17 18:50:09.228915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.228922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.228928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c94d80) 00:29:22.737 [2024-11-17 18:50:09.228936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.737 [2024-11-17 18:50:09.228945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:22.737 [2024-11-17 18:50:09.228959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:22.737 [2024-11-17 18:50:09.228971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.228978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c94d80) 00:29:22.737 [2024-11-17 18:50:09.228988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.737 [2024-11-17 18:50:09.229016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00480, cid 0, qid 0 00:29:22.737 [2024-11-17 18:50:09.229028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00600, cid 1, qid 0 00:29:22.737 [2024-11-17 18:50:09.229036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00780, cid 2, qid 0 00:29:22.737 [2024-11-17 18:50:09.229043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00900, cid 3, qid 0 00:29:22.737 [2024-11-17 18:50:09.229050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00a80, cid 4, qid 0 00:29:22.737 [2024-11-17 18:50:09.229174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.737 [2024-11-17 18:50:09.229186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.737 [2024-11-17 18:50:09.229193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.229200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00a80) on tqpair=0x1c94d80 00:29:22.737 [2024-11-17 18:50:09.229213] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:22.737 [2024-11-17 18:50:09.229224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:29:22.737 [2024-11-17 18:50:09.229241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.229250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c94d80) 00:29:22.737 [2024-11-17 18:50:09.229261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.737 [2024-11-17 18:50:09.229282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00a80, cid 4, qid 0 00:29:22.737 [2024-11-17 18:50:09.229405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:22.737 [2024-11-17 18:50:09.229419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:22.737 [2024-11-17 18:50:09.229426] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.229433] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c94d80): datao=0, datal=4096, cccid=4 00:29:22.737 [2024-11-17 18:50:09.229441] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d00a80) on tqpair(0x1c94d80): expected_datao=0, payload_size=4096 00:29:22.737 [2024-11-17 18:50:09.229448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.229464] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.229474] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.274689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.737 [2024-11-17 18:50:09.274708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.737 [2024-11-17 18:50:09.274731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.274738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1d00a80) on tqpair=0x1c94d80 00:29:22.737 [2024-11-17 18:50:09.274759] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:22.737 [2024-11-17 18:50:09.274795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.274807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c94d80) 00:29:22.737 [2024-11-17 18:50:09.274818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.737 [2024-11-17 18:50:09.274831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.274838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.274845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c94d80) 00:29:22.737 [2024-11-17 18:50:09.274854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.737 [2024-11-17 18:50:09.274887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00a80, cid 4, qid 0 00:29:22.737 [2024-11-17 18:50:09.274900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00c00, cid 5, qid 0 00:29:22.737 [2024-11-17 18:50:09.275079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:22.737 [2024-11-17 18:50:09.275094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:22.737 [2024-11-17 18:50:09.275101] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.275108] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c94d80): datao=0, datal=1024, cccid=4 00:29:22.737 [2024-11-17 18:50:09.275115] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d00a80) on tqpair(0x1c94d80): expected_datao=0, payload_size=1024 00:29:22.737 [2024-11-17 18:50:09.275123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.275133] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.275141] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.275149] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.737 [2024-11-17 18:50:09.275159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.737 [2024-11-17 18:50:09.275166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.737 [2024-11-17 18:50:09.275172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00c00) on tqpair=0x1c94d80 00:29:22.999 [2024-11-17 18:50:09.315789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.999 [2024-11-17 18:50:09.315808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.999 [2024-11-17 18:50:09.315816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.999 [2024-11-17 18:50:09.315823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00a80) on tqpair=0x1c94d80 00:29:22.999 [2024-11-17 18:50:09.315841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.999 [2024-11-17 18:50:09.315850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c94d80) 00:29:22.999 [2024-11-17 18:50:09.315862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.999 [2024-11-17 18:50:09.315892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00a80, cid 4, qid 0 00:29:22.999 [2024-11-17 18:50:09.316067] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:22.999 [2024-11-17 18:50:09.316080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:22.999 [2024-11-17 18:50:09.316087] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:22.999 [2024-11-17 18:50:09.316094] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c94d80): datao=0, datal=3072, cccid=4 00:29:22.999 [2024-11-17 18:50:09.316101] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d00a80) on tqpair(0x1c94d80): expected_datao=0, payload_size=3072 00:29:22.999 [2024-11-17 18:50:09.316109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.999 [2024-11-17 18:50:09.316119] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:22.999 [2024-11-17 18:50:09.316127] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:22.999 [2024-11-17 18:50:09.316147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.999 [2024-11-17 18:50:09.316160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.999 [2024-11-17 18:50:09.316167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.999 [2024-11-17 18:50:09.316173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00a80) on tqpair=0x1c94d80 00:29:22.999 [2024-11-17 18:50:09.316188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:22.999 [2024-11-17 18:50:09.316196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c94d80) 00:29:22.999 [2024-11-17 18:50:09.316207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.999 [2024-11-17 18:50:09.316240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00a80, cid 4, qid 0 00:29:22.999 [2024-11-17 
18:50:09.316339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:22.999 [2024-11-17 18:50:09.316353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:22.999 [2024-11-17 18:50:09.316360] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:22.999 [2024-11-17 18:50:09.316366] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c94d80): datao=0, datal=8, cccid=4 00:29:22.999 [2024-11-17 18:50:09.316374] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d00a80) on tqpair(0x1c94d80): expected_datao=0, payload_size=8 00:29:22.999 [2024-11-17 18:50:09.316381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:22.999 [2024-11-17 18:50:09.316391] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:22.999 [2024-11-17 18:50:09.316398] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:22.999 [2024-11-17 18:50:09.356777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:22.999 [2024-11-17 18:50:09.356796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:22.999 [2024-11-17 18:50:09.356804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:22.999 [2024-11-17 18:50:09.356811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00a80) on tqpair=0x1c94d80 00:29:22.999 ===================================================== 00:29:22.999 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:22.999 ===================================================== 00:29:22.999 Controller Capabilities/Features 00:29:22.999 ================================ 00:29:22.999 Vendor ID: 0000 00:29:22.999 Subsystem Vendor ID: 0000 00:29:22.999 Serial Number: .................... 00:29:22.999 Model Number: ........................................ 
00:29:22.999 Firmware Version: 25.01 00:29:22.999 Recommended Arb Burst: 0 00:29:22.999 IEEE OUI Identifier: 00 00 00 00:29:22.999 Multi-path I/O 00:29:22.999 May have multiple subsystem ports: No 00:29:22.999 May have multiple controllers: No 00:29:22.999 Associated with SR-IOV VF: No 00:29:22.999 Max Data Transfer Size: 131072 00:29:22.999 Max Number of Namespaces: 0 00:29:22.999 Max Number of I/O Queues: 1024 00:29:22.999 NVMe Specification Version (VS): 1.3 00:29:22.999 NVMe Specification Version (Identify): 1.3 00:29:22.999 Maximum Queue Entries: 128 00:29:22.999 Contiguous Queues Required: Yes 00:29:22.999 Arbitration Mechanisms Supported 00:29:22.999 Weighted Round Robin: Not Supported 00:29:22.999 Vendor Specific: Not Supported 00:29:22.999 Reset Timeout: 15000 ms 00:29:22.999 Doorbell Stride: 4 bytes 00:29:22.999 NVM Subsystem Reset: Not Supported 00:29:22.999 Command Sets Supported 00:29:22.999 NVM Command Set: Supported 00:29:23.000 Boot Partition: Not Supported 00:29:23.000 Memory Page Size Minimum: 4096 bytes 00:29:23.000 Memory Page Size Maximum: 4096 bytes 00:29:23.000 Persistent Memory Region: Not Supported 00:29:23.000 Optional Asynchronous Events Supported 00:29:23.000 Namespace Attribute Notices: Not Supported 00:29:23.000 Firmware Activation Notices: Not Supported 00:29:23.000 ANA Change Notices: Not Supported 00:29:23.000 PLE Aggregate Log Change Notices: Not Supported 00:29:23.000 LBA Status Info Alert Notices: Not Supported 00:29:23.000 EGE Aggregate Log Change Notices: Not Supported 00:29:23.000 Normal NVM Subsystem Shutdown event: Not Supported 00:29:23.000 Zone Descriptor Change Notices: Not Supported 00:29:23.000 Discovery Log Change Notices: Supported 00:29:23.000 Controller Attributes 00:29:23.000 128-bit Host Identifier: Not Supported 00:29:23.000 Non-Operational Permissive Mode: Not Supported 00:29:23.000 NVM Sets: Not Supported 00:29:23.000 Read Recovery Levels: Not Supported 00:29:23.000 Endurance Groups: Not Supported 00:29:23.000 
Predictable Latency Mode: Not Supported 00:29:23.000 Traffic Based Keep ALive: Not Supported 00:29:23.000 Namespace Granularity: Not Supported 00:29:23.000 SQ Associations: Not Supported 00:29:23.000 UUID List: Not Supported 00:29:23.000 Multi-Domain Subsystem: Not Supported 00:29:23.000 Fixed Capacity Management: Not Supported 00:29:23.000 Variable Capacity Management: Not Supported 00:29:23.000 Delete Endurance Group: Not Supported 00:29:23.000 Delete NVM Set: Not Supported 00:29:23.000 Extended LBA Formats Supported: Not Supported 00:29:23.000 Flexible Data Placement Supported: Not Supported 00:29:23.000 00:29:23.000 Controller Memory Buffer Support 00:29:23.000 ================================ 00:29:23.000 Supported: No 00:29:23.000 00:29:23.000 Persistent Memory Region Support 00:29:23.000 ================================ 00:29:23.000 Supported: No 00:29:23.000 00:29:23.000 Admin Command Set Attributes 00:29:23.000 ============================ 00:29:23.000 Security Send/Receive: Not Supported 00:29:23.000 Format NVM: Not Supported 00:29:23.000 Firmware Activate/Download: Not Supported 00:29:23.000 Namespace Management: Not Supported 00:29:23.000 Device Self-Test: Not Supported 00:29:23.000 Directives: Not Supported 00:29:23.000 NVMe-MI: Not Supported 00:29:23.000 Virtualization Management: Not Supported 00:29:23.000 Doorbell Buffer Config: Not Supported 00:29:23.000 Get LBA Status Capability: Not Supported 00:29:23.000 Command & Feature Lockdown Capability: Not Supported 00:29:23.000 Abort Command Limit: 1 00:29:23.000 Async Event Request Limit: 4 00:29:23.000 Number of Firmware Slots: N/A 00:29:23.000 Firmware Slot 1 Read-Only: N/A 00:29:23.000 Firmware Activation Without Reset: N/A 00:29:23.000 Multiple Update Detection Support: N/A 00:29:23.000 Firmware Update Granularity: No Information Provided 00:29:23.000 Per-Namespace SMART Log: No 00:29:23.000 Asymmetric Namespace Access Log Page: Not Supported 00:29:23.000 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:29:23.000 Command Effects Log Page: Not Supported 00:29:23.000 Get Log Page Extended Data: Supported 00:29:23.000 Telemetry Log Pages: Not Supported 00:29:23.000 Persistent Event Log Pages: Not Supported 00:29:23.000 Supported Log Pages Log Page: May Support 00:29:23.000 Commands Supported & Effects Log Page: Not Supported 00:29:23.000 Feature Identifiers & Effects Log Page:May Support 00:29:23.000 NVMe-MI Commands & Effects Log Page: May Support 00:29:23.000 Data Area 4 for Telemetry Log: Not Supported 00:29:23.000 Error Log Page Entries Supported: 128 00:29:23.000 Keep Alive: Not Supported 00:29:23.000 00:29:23.000 NVM Command Set Attributes 00:29:23.000 ========================== 00:29:23.000 Submission Queue Entry Size 00:29:23.000 Max: 1 00:29:23.000 Min: 1 00:29:23.000 Completion Queue Entry Size 00:29:23.000 Max: 1 00:29:23.000 Min: 1 00:29:23.000 Number of Namespaces: 0 00:29:23.000 Compare Command: Not Supported 00:29:23.000 Write Uncorrectable Command: Not Supported 00:29:23.000 Dataset Management Command: Not Supported 00:29:23.000 Write Zeroes Command: Not Supported 00:29:23.000 Set Features Save Field: Not Supported 00:29:23.000 Reservations: Not Supported 00:29:23.000 Timestamp: Not Supported 00:29:23.000 Copy: Not Supported 00:29:23.000 Volatile Write Cache: Not Present 00:29:23.000 Atomic Write Unit (Normal): 1 00:29:23.000 Atomic Write Unit (PFail): 1 00:29:23.000 Atomic Compare & Write Unit: 1 00:29:23.000 Fused Compare & Write: Supported 00:29:23.000 Scatter-Gather List 00:29:23.000 SGL Command Set: Supported 00:29:23.000 SGL Keyed: Supported 00:29:23.000 SGL Bit Bucket Descriptor: Not Supported 00:29:23.000 SGL Metadata Pointer: Not Supported 00:29:23.000 Oversized SGL: Not Supported 00:29:23.000 SGL Metadata Address: Not Supported 00:29:23.000 SGL Offset: Supported 00:29:23.000 Transport SGL Data Block: Not Supported 00:29:23.000 Replay Protected Memory Block: Not Supported 00:29:23.000 00:29:23.000 
Firmware Slot Information 00:29:23.000 ========================= 00:29:23.000 Active slot: 0 00:29:23.000 00:29:23.000 00:29:23.000 Error Log 00:29:23.000 ========= 00:29:23.000 00:29:23.000 Active Namespaces 00:29:23.000 ================= 00:29:23.000 Discovery Log Page 00:29:23.000 ================== 00:29:23.000 Generation Counter: 2 00:29:23.000 Number of Records: 2 00:29:23.000 Record Format: 0 00:29:23.000 00:29:23.000 Discovery Log Entry 0 00:29:23.000 ---------------------- 00:29:23.000 Transport Type: 3 (TCP) 00:29:23.000 Address Family: 1 (IPv4) 00:29:23.000 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:23.000 Entry Flags: 00:29:23.000 Duplicate Returned Information: 1 00:29:23.000 Explicit Persistent Connection Support for Discovery: 1 00:29:23.000 Transport Requirements: 00:29:23.000 Secure Channel: Not Required 00:29:23.000 Port ID: 0 (0x0000) 00:29:23.000 Controller ID: 65535 (0xffff) 00:29:23.000 Admin Max SQ Size: 128 00:29:23.000 Transport Service Identifier: 4420 00:29:23.000 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:23.000 Transport Address: 10.0.0.2 00:29:23.000 Discovery Log Entry 1 00:29:23.000 ---------------------- 00:29:23.000 Transport Type: 3 (TCP) 00:29:23.000 Address Family: 1 (IPv4) 00:29:23.000 Subsystem Type: 2 (NVM Subsystem) 00:29:23.000 Entry Flags: 00:29:23.000 Duplicate Returned Information: 0 00:29:23.000 Explicit Persistent Connection Support for Discovery: 0 00:29:23.000 Transport Requirements: 00:29:23.000 Secure Channel: Not Required 00:29:23.000 Port ID: 0 (0x0000) 00:29:23.000 Controller ID: 65535 (0xffff) 00:29:23.000 Admin Max SQ Size: 128 00:29:23.000 Transport Service Identifier: 4420 00:29:23.000 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:23.000 Transport Address: 10.0.0.2 [2024-11-17 18:50:09.356926] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:23.000 [2024-11-17 
18:50:09.356948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00480) on tqpair=0x1c94d80 00:29:23.000 [2024-11-17 18:50:09.356961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.000 [2024-11-17 18:50:09.356970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00600) on tqpair=0x1c94d80 00:29:23.000 [2024-11-17 18:50:09.356978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.000 [2024-11-17 18:50:09.356986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00780) on tqpair=0x1c94d80 00:29:23.000 [2024-11-17 18:50:09.356994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.000 [2024-11-17 18:50:09.357002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00900) on tqpair=0x1c94d80 00:29:23.000 [2024-11-17 18:50:09.357009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.000 [2024-11-17 18:50:09.357027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.000 [2024-11-17 18:50:09.357037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.000 [2024-11-17 18:50:09.357044] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c94d80) 00:29:23.000 [2024-11-17 18:50:09.357070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.000 [2024-11-17 18:50:09.357095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00900, cid 3, qid 0 00:29:23.000 [2024-11-17 18:50:09.357228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.000 [2024-11-17 
18:50:09.357243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.000 [2024-11-17 18:50:09.357250] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.000 [2024-11-17 18:50:09.357257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00900) on tqpair=0x1c94d80 00:29:23.000 [2024-11-17 18:50:09.357269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.357277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.357284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c94d80) 00:29:23.001 [2024-11-17 18:50:09.357298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.001 [2024-11-17 18:50:09.357326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00900, cid 3, qid 0 00:29:23.001 [2024-11-17 18:50:09.357414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.001 [2024-11-17 18:50:09.357428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.001 [2024-11-17 18:50:09.357435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.357442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00900) on tqpair=0x1c94d80 00:29:23.001 [2024-11-17 18:50:09.357450] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:23.001 [2024-11-17 18:50:09.357458] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:23.001 [2024-11-17 18:50:09.357474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.357483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.001 
[2024-11-17 18:50:09.357490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c94d80) 00:29:23.001 [2024-11-17 18:50:09.357500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.001 [2024-11-17 18:50:09.357521] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00900, cid 3, qid 0 00:29:23.001 [2024-11-17 18:50:09.357598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.001 [2024-11-17 18:50:09.357612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.001 [2024-11-17 18:50:09.357619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.357626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00900) on tqpair=0x1c94d80 00:29:23.001 [2024-11-17 18:50:09.357642] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.357652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.357659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c94d80) 00:29:23.001 [2024-11-17 18:50:09.357669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.001 [2024-11-17 18:50:09.361705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d00900, cid 3, qid 0 00:29:23.001 [2024-11-17 18:50:09.361852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.001 [2024-11-17 18:50:09.361866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.001 [2024-11-17 18:50:09.361873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.361880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d00900) on 
tqpair=0x1c94d80 00:29:23.001 [2024-11-17 18:50:09.361893] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:29:23.001 00:29:23.001 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:23.001 [2024-11-17 18:50:09.394870] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:29:23.001 [2024-11-17 18:50:09.394912] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid829905 ] 00:29:23.001 [2024-11-17 18:50:09.445337] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:23.001 [2024-11-17 18:50:09.445394] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:23.001 [2024-11-17 18:50:09.445404] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:23.001 [2024-11-17 18:50:09.445417] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:23.001 [2024-11-17 18:50:09.445430] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:23.001 [2024-11-17 18:50:09.445916] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:23.001 [2024-11-17 18:50:09.445955] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1631d80 0 00:29:23.001 [2024-11-17 18:50:09.451688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:23.001 
[2024-11-17 18:50:09.451708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:23.001 [2024-11-17 18:50:09.451716] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:23.001 [2024-11-17 18:50:09.451722] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:23.001 [2024-11-17 18:50:09.451753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.451765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.451772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1631d80) 00:29:23.001 [2024-11-17 18:50:09.451786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:23.001 [2024-11-17 18:50:09.451813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d480, cid 0, qid 0 00:29:23.001 [2024-11-17 18:50:09.459701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.001 [2024-11-17 18:50:09.459719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.001 [2024-11-17 18:50:09.459727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.459733] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d480) on tqpair=0x1631d80 00:29:23.001 [2024-11-17 18:50:09.459747] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:23.001 [2024-11-17 18:50:09.459772] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:23.001 [2024-11-17 18:50:09.459782] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:23.001 [2024-11-17 18:50:09.459801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.001 
[2024-11-17 18:50:09.459810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.459816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1631d80) 00:29:23.001 [2024-11-17 18:50:09.459828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.001 [2024-11-17 18:50:09.459852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d480, cid 0, qid 0 00:29:23.001 [2024-11-17 18:50:09.459970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.001 [2024-11-17 18:50:09.459984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.001 [2024-11-17 18:50:09.459992] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.459999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d480) on tqpair=0x1631d80 00:29:23.001 [2024-11-17 18:50:09.460007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:23.001 [2024-11-17 18:50:09.460020] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:23.001 [2024-11-17 18:50:09.460033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.460040] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.460051] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1631d80) 00:29:23.001 [2024-11-17 18:50:09.460062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.001 [2024-11-17 18:50:09.460084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d480, cid 0, qid 0 
00:29:23.001 [2024-11-17 18:50:09.460166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.001 [2024-11-17 18:50:09.460178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.001 [2024-11-17 18:50:09.460185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.460191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d480) on tqpair=0x1631d80 00:29:23.001 [2024-11-17 18:50:09.460200] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:23.001 [2024-11-17 18:50:09.460213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:23.001 [2024-11-17 18:50:09.460226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.460233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.460240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1631d80) 00:29:23.001 [2024-11-17 18:50:09.460250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.001 [2024-11-17 18:50:09.460271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d480, cid 0, qid 0 00:29:23.001 [2024-11-17 18:50:09.460366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.001 [2024-11-17 18:50:09.460379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.001 [2024-11-17 18:50:09.460387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.460393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d480) on tqpair=0x1631d80 00:29:23.001 [2024-11-17 18:50:09.460402] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:23.001 [2024-11-17 18:50:09.460418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.460427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.460434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1631d80) 00:29:23.001 [2024-11-17 18:50:09.460444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.001 [2024-11-17 18:50:09.460465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d480, cid 0, qid 0 00:29:23.001 [2024-11-17 18:50:09.460562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.001 [2024-11-17 18:50:09.460574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.001 [2024-11-17 18:50:09.460582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.001 [2024-11-17 18:50:09.460588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d480) on tqpair=0x1631d80 00:29:23.001 [2024-11-17 18:50:09.460596] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:23.001 [2024-11-17 18:50:09.460604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:23.002 [2024-11-17 18:50:09.460617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:23.002 [2024-11-17 18:50:09.460727] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:23.002 [2024-11-17 
18:50:09.460737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:23.002 [2024-11-17 18:50:09.460754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.460762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.460769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1631d80) 00:29:23.002 [2024-11-17 18:50:09.460779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.002 [2024-11-17 18:50:09.460800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d480, cid 0, qid 0 00:29:23.002 [2024-11-17 18:50:09.460907] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.002 [2024-11-17 18:50:09.460921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.002 [2024-11-17 18:50:09.460928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.460934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d480) on tqpair=0x1631d80 00:29:23.002 [2024-11-17 18:50:09.460942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:23.002 [2024-11-17 18:50:09.460959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.460968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.460974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1631d80) 00:29:23.002 [2024-11-17 18:50:09.460984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:23.002 [2024-11-17 18:50:09.461006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d480, cid 0, qid 0 00:29:23.002 [2024-11-17 18:50:09.461085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.002 [2024-11-17 18:50:09.461098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.002 [2024-11-17 18:50:09.461105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d480) on tqpair=0x1631d80 00:29:23.002 [2024-11-17 18:50:09.461119] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:23.002 [2024-11-17 18:50:09.461128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:23.002 [2024-11-17 18:50:09.461141] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:23.002 [2024-11-17 18:50:09.461155] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:23.002 [2024-11-17 18:50:09.461169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1631d80) 00:29:23.002 [2024-11-17 18:50:09.461187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.002 [2024-11-17 18:50:09.461208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d480, cid 0, qid 0 00:29:23.002 [2024-11-17 18:50:09.461332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:29:23.002 [2024-11-17 18:50:09.461347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.002 [2024-11-17 18:50:09.461354] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461360] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1631d80): datao=0, datal=4096, cccid=0 00:29:23.002 [2024-11-17 18:50:09.461368] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x169d480) on tqpair(0x1631d80): expected_datao=0, payload_size=4096 00:29:23.002 [2024-11-17 18:50:09.461375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461391] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461400] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.002 [2024-11-17 18:50:09.461423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.002 [2024-11-17 18:50:09.461429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d480) on tqpair=0x1631d80 00:29:23.002 [2024-11-17 18:50:09.461447] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:23.002 [2024-11-17 18:50:09.461455] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:23.002 [2024-11-17 18:50:09.461462] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:23.002 [2024-11-17 18:50:09.461473] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:23.002 [2024-11-17 18:50:09.461482] 
nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:23.002 [2024-11-17 18:50:09.461490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:23.002 [2024-11-17 18:50:09.461508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:23.002 [2024-11-17 18:50:09.461521] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1631d80) 00:29:23.002 [2024-11-17 18:50:09.461545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:23.002 [2024-11-17 18:50:09.461567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d480, cid 0, qid 0 00:29:23.002 [2024-11-17 18:50:09.461643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.002 [2024-11-17 18:50:09.461657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.002 [2024-11-17 18:50:09.461663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d480) on tqpair=0x1631d80 00:29:23.002 [2024-11-17 18:50:09.461689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1631d80) 00:29:23.002 [2024-11-17 18:50:09.461714] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.002 [2024-11-17 18:50:09.461724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1631d80) 00:29:23.002 [2024-11-17 18:50:09.461746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.002 [2024-11-17 18:50:09.461755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1631d80) 00:29:23.002 [2024-11-17 18:50:09.461777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.002 [2024-11-17 18:50:09.461790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461797] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461803] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.002 [2024-11-17 18:50:09.461812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.002 [2024-11-17 18:50:09.461821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:23.002 [2024-11-17 18:50:09.461836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:23.002 [2024-11-17 18:50:09.461847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.461854] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1631d80) 00:29:23.002 [2024-11-17 18:50:09.461864] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.002 [2024-11-17 18:50:09.461886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d480, cid 0, qid 0 00:29:23.002 [2024-11-17 18:50:09.461898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d600, cid 1, qid 0 00:29:23.002 [2024-11-17 18:50:09.461906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d780, cid 2, qid 0 00:29:23.002 [2024-11-17 18:50:09.461914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.002 [2024-11-17 18:50:09.461921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169da80, cid 4, qid 0 00:29:23.002 [2024-11-17 18:50:09.462097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.002 [2024-11-17 18:50:09.462110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.002 [2024-11-17 18:50:09.462117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.462124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169da80) on tqpair=0x1631d80 00:29:23.002 [2024-11-17 18:50:09.462136] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:23.002 [2024-11-17 18:50:09.462146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller 
iocs specific (timeout 30000 ms) 00:29:23.002 [2024-11-17 18:50:09.462160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:23.002 [2024-11-17 18:50:09.462171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:23.002 [2024-11-17 18:50:09.462182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.462189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.002 [2024-11-17 18:50:09.462195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1631d80) 00:29:23.002 [2024-11-17 18:50:09.462205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:23.002 [2024-11-17 18:50:09.462226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169da80, cid 4, qid 0 00:29:23.003 [2024-11-17 18:50:09.462302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.003 [2024-11-17 18:50:09.462315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.003 [2024-11-17 18:50:09.462322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.462329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169da80) on tqpair=0x1631d80 00:29:23.003 [2024-11-17 18:50:09.462397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:23.003 [2024-11-17 18:50:09.462420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:23.003 [2024-11-17 18:50:09.462435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:29:23.003 [2024-11-17 18:50:09.462443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1631d80) 00:29:23.003 [2024-11-17 18:50:09.462453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.003 [2024-11-17 18:50:09.462474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169da80, cid 4, qid 0 00:29:23.003 [2024-11-17 18:50:09.462607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.003 [2024-11-17 18:50:09.462622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.003 [2024-11-17 18:50:09.462629] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.462635] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1631d80): datao=0, datal=4096, cccid=4 00:29:23.003 [2024-11-17 18:50:09.462643] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x169da80) on tqpair(0x1631d80): expected_datao=0, payload_size=4096 00:29:23.003 [2024-11-17 18:50:09.462650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.462660] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.462668] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.462691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.003 [2024-11-17 18:50:09.462703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.003 [2024-11-17 18:50:09.462709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.462716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169da80) on tqpair=0x1631d80 00:29:23.003 [2024-11-17 18:50:09.462730] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:23.003 [2024-11-17 18:50:09.462750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:23.003 [2024-11-17 18:50:09.462769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:23.003 [2024-11-17 18:50:09.462782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.462790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1631d80) 00:29:23.003 [2024-11-17 18:50:09.462800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.003 [2024-11-17 18:50:09.462822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169da80, cid 4, qid 0 00:29:23.003 [2024-11-17 18:50:09.462955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.003 [2024-11-17 18:50:09.462968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.003 [2024-11-17 18:50:09.462974] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.462980] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1631d80): datao=0, datal=4096, cccid=4 00:29:23.003 [2024-11-17 18:50:09.462988] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x169da80) on tqpair(0x1631d80): expected_datao=0, payload_size=4096 00:29:23.003 [2024-11-17 18:50:09.462995] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.463005] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.463013] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 
18:50:09.463024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.003 [2024-11-17 18:50:09.463034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.003 [2024-11-17 18:50:09.463041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.463051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169da80) on tqpair=0x1631d80 00:29:23.003 [2024-11-17 18:50:09.463071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:23.003 [2024-11-17 18:50:09.463090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:23.003 [2024-11-17 18:50:09.463104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.463112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1631d80) 00:29:23.003 [2024-11-17 18:50:09.463122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.003 [2024-11-17 18:50:09.463143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169da80, cid 4, qid 0 00:29:23.003 [2024-11-17 18:50:09.463271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.003 [2024-11-17 18:50:09.463283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.003 [2024-11-17 18:50:09.463290] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.463296] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1631d80): datao=0, datal=4096, cccid=4 00:29:23.003 [2024-11-17 18:50:09.463303] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x169da80) on tqpair(0x1631d80): expected_datao=0, payload_size=4096 00:29:23.003 [2024-11-17 18:50:09.463310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.463320] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.463328] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.463340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.003 [2024-11-17 18:50:09.463350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.003 [2024-11-17 18:50:09.463356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.463363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169da80) on tqpair=0x1631d80 00:29:23.003 [2024-11-17 18:50:09.463375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:23.003 [2024-11-17 18:50:09.463390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:23.003 [2024-11-17 18:50:09.463405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:23.003 [2024-11-17 18:50:09.463415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:23.003 [2024-11-17 18:50:09.463424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:23.003 [2024-11-17 18:50:09.463432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:23.003 [2024-11-17 18:50:09.463440] 
nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:23.003 [2024-11-17 18:50:09.463448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:23.003 [2024-11-17 18:50:09.463456] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:23.003 [2024-11-17 18:50:09.463474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.463483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1631d80) 00:29:23.003 [2024-11-17 18:50:09.463494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.003 [2024-11-17 18:50:09.463508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.463516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.463522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1631d80) 00:29:23.003 [2024-11-17 18:50:09.463531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.003 [2024-11-17 18:50:09.463556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169da80, cid 4, qid 0 00:29:23.003 [2024-11-17 18:50:09.463568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169dc00, cid 5, qid 0 00:29:23.003 [2024-11-17 18:50:09.467687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.003 [2024-11-17 18:50:09.467703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.003 [2024-11-17 18:50:09.467710] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.467717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169da80) on tqpair=0x1631d80 00:29:23.003 [2024-11-17 18:50:09.467727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.003 [2024-11-17 18:50:09.467736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.003 [2024-11-17 18:50:09.467742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.467748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169dc00) on tqpair=0x1631d80 00:29:23.003 [2024-11-17 18:50:09.467764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.467789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1631d80) 00:29:23.003 [2024-11-17 18:50:09.467799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.003 [2024-11-17 18:50:09.467822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169dc00, cid 5, qid 0 00:29:23.003 [2024-11-17 18:50:09.467955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.003 [2024-11-17 18:50:09.467967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.003 [2024-11-17 18:50:09.467974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.467980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169dc00) on tqpair=0x1631d80 00:29:23.003 [2024-11-17 18:50:09.467996] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.003 [2024-11-17 18:50:09.468005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1631d80) 00:29:23.003 [2024-11-17 18:50:09.468015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.003 [2024-11-17 18:50:09.468036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169dc00, cid 5, qid 0 00:29:23.003 [2024-11-17 18:50:09.468127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.004 [2024-11-17 18:50:09.468142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.004 [2024-11-17 18:50:09.468150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169dc00) on tqpair=0x1631d80 00:29:23.004 [2024-11-17 18:50:09.468173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1631d80) 00:29:23.004 [2024-11-17 18:50:09.468192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-11-17 18:50:09.468213] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169dc00, cid 5, qid 0 00:29:23.004 [2024-11-17 18:50:09.468288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.004 [2024-11-17 18:50:09.468305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.004 [2024-11-17 18:50:09.468313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169dc00) on tqpair=0x1631d80 00:29:23.004 [2024-11-17 18:50:09.468343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1631d80) 00:29:23.004 [2024-11-17 
18:50:09.468365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-11-17 18:50:09.468376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1631d80) 00:29:23.004 [2024-11-17 18:50:09.468393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-11-17 18:50:09.468404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1631d80) 00:29:23.004 [2024-11-17 18:50:09.468421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-11-17 18:50:09.468432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1631d80) 00:29:23.004 [2024-11-17 18:50:09.468449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-11-17 18:50:09.468471] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169dc00, cid 5, qid 0 00:29:23.004 [2024-11-17 18:50:09.468482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169da80, cid 4, qid 0 00:29:23.004 [2024-11-17 18:50:09.468490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169dd80, cid 6, qid 0 00:29:23.004 [2024-11-17 
18:50:09.468497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169df00, cid 7, qid 0 00:29:23.004 [2024-11-17 18:50:09.468669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.004 [2024-11-17 18:50:09.468691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.004 [2024-11-17 18:50:09.468699] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468706] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1631d80): datao=0, datal=8192, cccid=5 00:29:23.004 [2024-11-17 18:50:09.468713] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x169dc00) on tqpair(0x1631d80): expected_datao=0, payload_size=8192 00:29:23.004 [2024-11-17 18:50:09.468720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468731] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468739] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.004 [2024-11-17 18:50:09.468757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.004 [2024-11-17 18:50:09.468763] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468769] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1631d80): datao=0, datal=512, cccid=4 00:29:23.004 [2024-11-17 18:50:09.468776] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x169da80) on tqpair(0x1631d80): expected_datao=0, payload_size=512 00:29:23.004 [2024-11-17 18:50:09.468783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468793] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468806] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.004 [2024-11-17 18:50:09.468825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.004 [2024-11-17 18:50:09.468832] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468838] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1631d80): datao=0, datal=512, cccid=6 00:29:23.004 [2024-11-17 18:50:09.468845] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x169dd80) on tqpair(0x1631d80): expected_datao=0, payload_size=512 00:29:23.004 [2024-11-17 18:50:09.468852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468862] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468869] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.004 [2024-11-17 18:50:09.468887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.004 [2024-11-17 18:50:09.468893] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468899] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1631d80): datao=0, datal=4096, cccid=7 00:29:23.004 [2024-11-17 18:50:09.468906] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x169df00) on tqpair(0x1631d80): expected_datao=0, payload_size=4096 00:29:23.004 [2024-11-17 18:50:09.468914] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468932] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468941] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:29:23.004 [2024-11-17 18:50:09.468953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.004 [2024-11-17 18:50:09.468962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.004 [2024-11-17 18:50:09.468969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.468976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169dc00) on tqpair=0x1631d80 00:29:23.004 [2024-11-17 18:50:09.468994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.004 [2024-11-17 18:50:09.469006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.004 [2024-11-17 18:50:09.469013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.469019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169da80) on tqpair=0x1631d80 00:29:23.004 [2024-11-17 18:50:09.469034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.004 [2024-11-17 18:50:09.469061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.004 [2024-11-17 18:50:09.469067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.469073] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169dd80) on tqpair=0x1631d80 00:29:23.004 [2024-11-17 18:50:09.469084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.004 [2024-11-17 18:50:09.469093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.004 [2024-11-17 18:50:09.469100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.004 [2024-11-17 18:50:09.469106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169df00) on tqpair=0x1631d80 00:29:23.004 ===================================================== 00:29:23.004 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:29:23.004 ===================================================== 00:29:23.004 Controller Capabilities/Features 00:29:23.004 ================================ 00:29:23.004 Vendor ID: 8086 00:29:23.004 Subsystem Vendor ID: 8086 00:29:23.004 Serial Number: SPDK00000000000001 00:29:23.004 Model Number: SPDK bdev Controller 00:29:23.004 Firmware Version: 25.01 00:29:23.004 Recommended Arb Burst: 6 00:29:23.004 IEEE OUI Identifier: e4 d2 5c 00:29:23.004 Multi-path I/O 00:29:23.004 May have multiple subsystem ports: Yes 00:29:23.004 May have multiple controllers: Yes 00:29:23.004 Associated with SR-IOV VF: No 00:29:23.004 Max Data Transfer Size: 131072 00:29:23.004 Max Number of Namespaces: 32 00:29:23.004 Max Number of I/O Queues: 127 00:29:23.004 NVMe Specification Version (VS): 1.3 00:29:23.004 NVMe Specification Version (Identify): 1.3 00:29:23.004 Maximum Queue Entries: 128 00:29:23.004 Contiguous Queues Required: Yes 00:29:23.004 Arbitration Mechanisms Supported 00:29:23.004 Weighted Round Robin: Not Supported 00:29:23.004 Vendor Specific: Not Supported 00:29:23.004 Reset Timeout: 15000 ms 00:29:23.004 Doorbell Stride: 4 bytes 00:29:23.004 NVM Subsystem Reset: Not Supported 00:29:23.005 Command Sets Supported 00:29:23.005 NVM Command Set: Supported 00:29:23.005 Boot Partition: Not Supported 00:29:23.005 Memory Page Size Minimum: 4096 bytes 00:29:23.005 Memory Page Size Maximum: 4096 bytes 00:29:23.005 Persistent Memory Region: Not Supported 00:29:23.005 Optional Asynchronous Events Supported 00:29:23.005 Namespace Attribute Notices: Supported 00:29:23.005 Firmware Activation Notices: Not Supported 00:29:23.005 ANA Change Notices: Not Supported 00:29:23.005 PLE Aggregate Log Change Notices: Not Supported 00:29:23.005 LBA Status Info Alert Notices: Not Supported 00:29:23.005 EGE Aggregate Log Change Notices: Not Supported 00:29:23.005 Normal NVM Subsystem Shutdown event: Not Supported 00:29:23.005 Zone Descriptor Change Notices: Not Supported 00:29:23.005 Discovery 
Log Change Notices: Not Supported 00:29:23.005 Controller Attributes 00:29:23.005 128-bit Host Identifier: Supported 00:29:23.005 Non-Operational Permissive Mode: Not Supported 00:29:23.005 NVM Sets: Not Supported 00:29:23.005 Read Recovery Levels: Not Supported 00:29:23.005 Endurance Groups: Not Supported 00:29:23.005 Predictable Latency Mode: Not Supported 00:29:23.005 Traffic Based Keep ALive: Not Supported 00:29:23.005 Namespace Granularity: Not Supported 00:29:23.005 SQ Associations: Not Supported 00:29:23.005 UUID List: Not Supported 00:29:23.005 Multi-Domain Subsystem: Not Supported 00:29:23.005 Fixed Capacity Management: Not Supported 00:29:23.005 Variable Capacity Management: Not Supported 00:29:23.005 Delete Endurance Group: Not Supported 00:29:23.005 Delete NVM Set: Not Supported 00:29:23.005 Extended LBA Formats Supported: Not Supported 00:29:23.005 Flexible Data Placement Supported: Not Supported 00:29:23.005 00:29:23.005 Controller Memory Buffer Support 00:29:23.005 ================================ 00:29:23.005 Supported: No 00:29:23.005 00:29:23.005 Persistent Memory Region Support 00:29:23.005 ================================ 00:29:23.005 Supported: No 00:29:23.005 00:29:23.005 Admin Command Set Attributes 00:29:23.005 ============================ 00:29:23.005 Security Send/Receive: Not Supported 00:29:23.005 Format NVM: Not Supported 00:29:23.005 Firmware Activate/Download: Not Supported 00:29:23.005 Namespace Management: Not Supported 00:29:23.005 Device Self-Test: Not Supported 00:29:23.005 Directives: Not Supported 00:29:23.005 NVMe-MI: Not Supported 00:29:23.005 Virtualization Management: Not Supported 00:29:23.005 Doorbell Buffer Config: Not Supported 00:29:23.005 Get LBA Status Capability: Not Supported 00:29:23.005 Command & Feature Lockdown Capability: Not Supported 00:29:23.005 Abort Command Limit: 4 00:29:23.005 Async Event Request Limit: 4 00:29:23.005 Number of Firmware Slots: N/A 00:29:23.005 Firmware Slot 1 Read-Only: N/A 00:29:23.005 
Firmware Activation Without Reset: N/A 00:29:23.005 Multiple Update Detection Support: N/A 00:29:23.005 Firmware Update Granularity: No Information Provided 00:29:23.005 Per-Namespace SMART Log: No 00:29:23.005 Asymmetric Namespace Access Log Page: Not Supported 00:29:23.005 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:23.005 Command Effects Log Page: Supported 00:29:23.005 Get Log Page Extended Data: Supported 00:29:23.005 Telemetry Log Pages: Not Supported 00:29:23.005 Persistent Event Log Pages: Not Supported 00:29:23.005 Supported Log Pages Log Page: May Support 00:29:23.005 Commands Supported & Effects Log Page: Not Supported 00:29:23.005 Feature Identifiers & Effects Log Page:May Support 00:29:23.005 NVMe-MI Commands & Effects Log Page: May Support 00:29:23.005 Data Area 4 for Telemetry Log: Not Supported 00:29:23.005 Error Log Page Entries Supported: 128 00:29:23.005 Keep Alive: Supported 00:29:23.005 Keep Alive Granularity: 10000 ms 00:29:23.005 00:29:23.005 NVM Command Set Attributes 00:29:23.005 ========================== 00:29:23.005 Submission Queue Entry Size 00:29:23.005 Max: 64 00:29:23.005 Min: 64 00:29:23.005 Completion Queue Entry Size 00:29:23.005 Max: 16 00:29:23.005 Min: 16 00:29:23.005 Number of Namespaces: 32 00:29:23.005 Compare Command: Supported 00:29:23.005 Write Uncorrectable Command: Not Supported 00:29:23.005 Dataset Management Command: Supported 00:29:23.005 Write Zeroes Command: Supported 00:29:23.005 Set Features Save Field: Not Supported 00:29:23.005 Reservations: Supported 00:29:23.005 Timestamp: Not Supported 00:29:23.005 Copy: Supported 00:29:23.005 Volatile Write Cache: Present 00:29:23.005 Atomic Write Unit (Normal): 1 00:29:23.005 Atomic Write Unit (PFail): 1 00:29:23.005 Atomic Compare & Write Unit: 1 00:29:23.005 Fused Compare & Write: Supported 00:29:23.005 Scatter-Gather List 00:29:23.005 SGL Command Set: Supported 00:29:23.005 SGL Keyed: Supported 00:29:23.005 SGL Bit Bucket Descriptor: Not Supported 00:29:23.005 SGL 
Metadata Pointer: Not Supported 00:29:23.005 Oversized SGL: Not Supported 00:29:23.005 SGL Metadata Address: Not Supported 00:29:23.005 SGL Offset: Supported 00:29:23.005 Transport SGL Data Block: Not Supported 00:29:23.005 Replay Protected Memory Block: Not Supported 00:29:23.005 00:29:23.005 Firmware Slot Information 00:29:23.005 ========================= 00:29:23.005 Active slot: 1 00:29:23.005 Slot 1 Firmware Revision: 25.01 00:29:23.005 00:29:23.005 00:29:23.005 Commands Supported and Effects 00:29:23.005 ============================== 00:29:23.005 Admin Commands 00:29:23.005 -------------- 00:29:23.005 Get Log Page (02h): Supported 00:29:23.005 Identify (06h): Supported 00:29:23.005 Abort (08h): Supported 00:29:23.005 Set Features (09h): Supported 00:29:23.005 Get Features (0Ah): Supported 00:29:23.005 Asynchronous Event Request (0Ch): Supported 00:29:23.005 Keep Alive (18h): Supported 00:29:23.005 I/O Commands 00:29:23.005 ------------ 00:29:23.005 Flush (00h): Supported LBA-Change 00:29:23.005 Write (01h): Supported LBA-Change 00:29:23.005 Read (02h): Supported 00:29:23.005 Compare (05h): Supported 00:29:23.005 Write Zeroes (08h): Supported LBA-Change 00:29:23.005 Dataset Management (09h): Supported LBA-Change 00:29:23.005 Copy (19h): Supported LBA-Change 00:29:23.005 00:29:23.005 Error Log 00:29:23.005 ========= 00:29:23.005 00:29:23.005 Arbitration 00:29:23.005 =========== 00:29:23.005 Arbitration Burst: 1 00:29:23.005 00:29:23.005 Power Management 00:29:23.005 ================ 00:29:23.005 Number of Power States: 1 00:29:23.005 Current Power State: Power State #0 00:29:23.005 Power State #0: 00:29:23.005 Max Power: 0.00 W 00:29:23.005 Non-Operational State: Operational 00:29:23.005 Entry Latency: Not Reported 00:29:23.005 Exit Latency: Not Reported 00:29:23.005 Relative Read Throughput: 0 00:29:23.005 Relative Read Latency: 0 00:29:23.005 Relative Write Throughput: 0 00:29:23.005 Relative Write Latency: 0 00:29:23.005 Idle Power: Not Reported 
00:29:23.005 Active Power: Not Reported 00:29:23.005 Non-Operational Permissive Mode: Not Supported 00:29:23.005 00:29:23.005 Health Information 00:29:23.005 ================== 00:29:23.005 Critical Warnings: 00:29:23.005 Available Spare Space: OK 00:29:23.005 Temperature: OK 00:29:23.005 Device Reliability: OK 00:29:23.005 Read Only: No 00:29:23.005 Volatile Memory Backup: OK 00:29:23.005 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:23.005 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:23.005 Available Spare: 0% 00:29:23.005 Available Spare Threshold: 0% 00:29:23.005 Life Percentage Used:[2024-11-17 18:50:09.469250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.005 [2024-11-17 18:50:09.469262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1631d80) 00:29:23.005 [2024-11-17 18:50:09.469273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.005 [2024-11-17 18:50:09.469294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169df00, cid 7, qid 0 00:29:23.005 [2024-11-17 18:50:09.469423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.005 [2024-11-17 18:50:09.469439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.005 [2024-11-17 18:50:09.469447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.005 [2024-11-17 18:50:09.469454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169df00) on tqpair=0x1631d80 00:29:23.005 [2024-11-17 18:50:09.469500] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:23.005 [2024-11-17 18:50:09.469520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d480) on tqpair=0x1631d80 00:29:23.006 [2024-11-17 18:50:09.469530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.006 [2024-11-17 18:50:09.469539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d600) on tqpair=0x1631d80 00:29:23.006 [2024-11-17 18:50:09.469546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.006 [2024-11-17 18:50:09.469554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d780) on tqpair=0x1631d80 00:29:23.006 [2024-11-17 18:50:09.469562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.006 [2024-11-17 18:50:09.469570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.006 [2024-11-17 18:50:09.469577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.006 [2024-11-17 18:50:09.469589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.469598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.469604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.006 [2024-11-17 18:50:09.469614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.006 [2024-11-17 18:50:09.469636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.006 [2024-11-17 18:50:09.469731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.006 [2024-11-17 18:50:09.469747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.006 [2024-11-17 18:50:09.469754] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.469761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.006 [2024-11-17 18:50:09.469772] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.469779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.469786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.006 [2024-11-17 18:50:09.469796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.006 [2024-11-17 18:50:09.469822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.006 [2024-11-17 18:50:09.469910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.006 [2024-11-17 18:50:09.469922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.006 [2024-11-17 18:50:09.469929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.469936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.006 [2024-11-17 18:50:09.469943] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:23.006 [2024-11-17 18:50:09.469951] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:23.006 [2024-11-17 18:50:09.469966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.469975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.469981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.006 
[2024-11-17 18:50:09.469995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.006 [2024-11-17 18:50:09.470017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.006 [2024-11-17 18:50:09.470093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.006 [2024-11-17 18:50:09.470106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.006 [2024-11-17 18:50:09.470112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.006 [2024-11-17 18:50:09.470135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.006 [2024-11-17 18:50:09.470160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.006 [2024-11-17 18:50:09.470180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.006 [2024-11-17 18:50:09.470254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.006 [2024-11-17 18:50:09.470268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.006 [2024-11-17 18:50:09.470274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.006 [2024-11-17 18:50:09.470297] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.006 [2024-11-17 
18:50:09.470306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.006 [2024-11-17 18:50:09.470323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.006 [2024-11-17 18:50:09.470343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.006 [2024-11-17 18:50:09.470437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.006 [2024-11-17 18:50:09.470450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.006 [2024-11-17 18:50:09.470457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.006 [2024-11-17 18:50:09.470480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.006 [2024-11-17 18:50:09.470505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.006 [2024-11-17 18:50:09.470526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.006 [2024-11-17 18:50:09.470599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.006 [2024-11-17 18:50:09.470611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.006 [2024-11-17 18:50:09.470618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.006 [2024-11-17 
18:50:09.470624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.006 [2024-11-17 18:50:09.470640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.006 [2024-11-17 18:50:09.470666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.006 [2024-11-17 18:50:09.470699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.006 [2024-11-17 18:50:09.470771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.006 [2024-11-17 18:50:09.470783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.006 [2024-11-17 18:50:09.470790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.006 [2024-11-17 18:50:09.470812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.006 [2024-11-17 18:50:09.470838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.006 [2024-11-17 18:50:09.470858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.006 [2024-11-17 18:50:09.470934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.006 
[2024-11-17 18:50:09.470947] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.006 [2024-11-17 18:50:09.470953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.006 [2024-11-17 18:50:09.470975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.470991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.006 [2024-11-17 18:50:09.471001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.006 [2024-11-17 18:50:09.471021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.006 [2024-11-17 18:50:09.471090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.006 [2024-11-17 18:50:09.471102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.006 [2024-11-17 18:50:09.471108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.471115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.006 [2024-11-17 18:50:09.471130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.471139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.471146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.006 [2024-11-17 18:50:09.471156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.006 
[2024-11-17 18:50:09.471176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.006 [2024-11-17 18:50:09.471246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.006 [2024-11-17 18:50:09.471258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.006 [2024-11-17 18:50:09.471265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.471271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.006 [2024-11-17 18:50:09.471287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.471296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.006 [2024-11-17 18:50:09.471302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.006 [2024-11-17 18:50:09.471312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.006 [2024-11-17 18:50:09.471336] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.007 [2024-11-17 18:50:09.471407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.007 [2024-11-17 18:50:09.471419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.007 [2024-11-17 18:50:09.471425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.007 [2024-11-17 18:50:09.471432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.007 [2024-11-17 18:50:09.471448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.007 [2024-11-17 18:50:09.471457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.007 [2024-11-17 18:50:09.471463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.007 [2024-11-17 18:50:09.471473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.007 [2024-11-17 18:50:09.471493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.007 [2024-11-17 18:50:09.471570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.007 [2024-11-17 18:50:09.471583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.007 [2024-11-17 18:50:09.471590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.007 [2024-11-17 18:50:09.471597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.007 [2024-11-17 18:50:09.471612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.007 [2024-11-17 18:50:09.471622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.007 [2024-11-17 18:50:09.471628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.007 [2024-11-17 18:50:09.471638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.007 [2024-11-17 18:50:09.471658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.007 [2024-11-17 18:50:09.475690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.007 [2024-11-17 18:50:09.475706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.007 [2024-11-17 18:50:09.475713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.007 [2024-11-17 18:50:09.475720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.007 [2024-11-17 18:50:09.475751] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.007 [2024-11-17 18:50:09.475760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.007 [2024-11-17 18:50:09.475767] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1631d80) 00:29:23.007 [2024-11-17 18:50:09.475777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.007 [2024-11-17 18:50:09.475800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x169d900, cid 3, qid 0 00:29:23.007 [2024-11-17 18:50:09.475907] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.007 [2024-11-17 18:50:09.475921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.007 [2024-11-17 18:50:09.475928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.007 [2024-11-17 18:50:09.475935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x169d900) on tqpair=0x1631d80 00:29:23.007 [2024-11-17 18:50:09.475948] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:29:23.007 0% 00:29:23.007 Data Units Read: 0 00:29:23.007 Data Units Written: 0 00:29:23.007 Host Read Commands: 0 00:29:23.007 Host Write Commands: 0 00:29:23.007 Controller Busy Time: 0 minutes 00:29:23.007 Power Cycles: 0 00:29:23.007 Power On Hours: 0 hours 00:29:23.007 Unsafe Shutdowns: 0 00:29:23.007 Unrecoverable Media Errors: 0 00:29:23.007 Lifetime Error Log Entries: 0 00:29:23.007 Warning Temperature Time: 0 minutes 00:29:23.007 Critical Temperature Time: 0 minutes 00:29:23.007 00:29:23.007 Number of Queues 00:29:23.007 ================ 00:29:23.007 Number of I/O Submission Queues: 127 00:29:23.007 Number of I/O Completion Queues: 127 00:29:23.007 00:29:23.007 Active Namespaces 00:29:23.007 ================= 00:29:23.007 Namespace ID:1 
00:29:23.007 Error Recovery Timeout: Unlimited 00:29:23.007 Command Set Identifier: NVM (00h) 00:29:23.007 Deallocate: Supported 00:29:23.007 Deallocated/Unwritten Error: Not Supported 00:29:23.007 Deallocated Read Value: Unknown 00:29:23.007 Deallocate in Write Zeroes: Not Supported 00:29:23.007 Deallocated Guard Field: 0xFFFF 00:29:23.007 Flush: Supported 00:29:23.007 Reservation: Supported 00:29:23.007 Namespace Sharing Capabilities: Multiple Controllers 00:29:23.007 Size (in LBAs): 131072 (0GiB) 00:29:23.007 Capacity (in LBAs): 131072 (0GiB) 00:29:23.007 Utilization (in LBAs): 131072 (0GiB) 00:29:23.007 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:23.007 EUI64: ABCDEF0123456789 00:29:23.007 UUID: ef7ad289-d262-41a8-8a59-0847bca165a5 00:29:23.007 Thin Provisioning: Not Supported 00:29:23.007 Per-NS Atomic Units: Yes 00:29:23.007 Atomic Boundary Size (Normal): 0 00:29:23.007 Atomic Boundary Size (PFail): 0 00:29:23.007 Atomic Boundary Offset: 0 00:29:23.007 Maximum Single Source Range Length: 65535 00:29:23.007 Maximum Copy Length: 65535 00:29:23.007 Maximum Source Range Count: 1 00:29:23.007 NGUID/EUI64 Never Reused: No 00:29:23.007 Namespace Write Protected: No 00:29:23.007 Number of LBA Formats: 1 00:29:23.007 Current LBA Format: LBA Format #00 00:29:23.007 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:23.007 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 
00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:23.007 rmmod nvme_tcp 00:29:23.007 rmmod nvme_fabrics 00:29:23.007 rmmod nvme_keyring 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 829756 ']' 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 829756 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 829756 ']' 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 829756 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.007 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 829756 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 829756' 00:29:23.266 killing process with pid 829756 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 829756 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 829756 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.266 18:50:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.804 18:50:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:25.804 00:29:25.804 real 0m5.648s 00:29:25.804 user 0m4.399s 00:29:25.804 sys 0m2.044s 00:29:25.804 18:50:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.804 18:50:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:25.804 ************************************ 00:29:25.804 END TEST nvmf_identify 00:29:25.804 ************************************ 00:29:25.804 18:50:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:25.804 18:50:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:25.804 18:50:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.804 18:50:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.804 ************************************ 00:29:25.804 START TEST nvmf_perf 00:29:25.804 ************************************ 00:29:25.804 18:50:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:25.804 * Looking for test storage... 
00:29:25.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:25.804 18:50:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:25.804 18:50:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:25.804 18:50:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.804 --rc genhtml_branch_coverage=1 00:29:25.804 --rc genhtml_function_coverage=1 00:29:25.804 --rc genhtml_legend=1 00:29:25.804 --rc geninfo_all_blocks=1 00:29:25.804 --rc geninfo_unexecuted_blocks=1 00:29:25.804 00:29:25.804 ' 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:29:25.804 --rc genhtml_branch_coverage=1 00:29:25.804 --rc genhtml_function_coverage=1 00:29:25.804 --rc genhtml_legend=1 00:29:25.804 --rc geninfo_all_blocks=1 00:29:25.804 --rc geninfo_unexecuted_blocks=1 00:29:25.804 00:29:25.804 ' 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.804 --rc genhtml_branch_coverage=1 00:29:25.804 --rc genhtml_function_coverage=1 00:29:25.804 --rc genhtml_legend=1 00:29:25.804 --rc geninfo_all_blocks=1 00:29:25.804 --rc geninfo_unexecuted_blocks=1 00:29:25.804 00:29:25.804 ' 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.804 --rc genhtml_branch_coverage=1 00:29:25.804 --rc genhtml_function_coverage=1 00:29:25.804 --rc genhtml_legend=1 00:29:25.804 --rc geninfo_all_blocks=1 00:29:25.804 --rc geninfo_unexecuted_blocks=1 00:29:25.804 00:29:25.804 ' 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.804 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:25.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:25.805 18:50:12 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:25.805 18:50:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:27.708 18:50:14 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:27.708 
18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:27.708 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:27.708 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:27.708 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:27.708 18:50:14 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.708 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:27.709 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:27.709 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:27.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:27.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:29:27.967 00:29:27.967 --- 10.0.0.2 ping statistics --- 00:29:27.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.967 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:27.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:29:27.967 00:29:27.967 --- 10.0.0.1 ping statistics --- 00:29:27.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.967 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:27.967 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=831845 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 831845 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 831845 ']' 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.968 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:27.968 [2024-11-17 18:50:14.456073] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:29:27.968 [2024-11-17 18:50:14.456142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.968 [2024-11-17 18:50:14.531143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:28.226 [2024-11-17 18:50:14.580331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.226 [2024-11-17 18:50:14.580390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.226 [2024-11-17 18:50:14.580412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.226 [2024-11-17 18:50:14.580428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.226 [2024-11-17 18:50:14.580444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:28.226 [2024-11-17 18:50:14.582083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.226 [2024-11-17 18:50:14.582109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.226 [2024-11-17 18:50:14.582171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.226 [2024-11-17 18:50:14.582174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.226 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.226 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:28.226 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:28.226 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.226 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:28.226 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.226 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:28.226 18:50:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:31.503 18:50:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:31.503 18:50:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:31.760 18:50:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:29:31.760 18:50:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:32.018 18:50:18 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:32.018 18:50:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:29:32.018 18:50:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:32.018 18:50:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:32.018 18:50:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:32.276 [2024-11-17 18:50:18.744060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.276 18:50:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:32.533 18:50:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:32.533 18:50:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:32.790 18:50:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:32.791 18:50:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:33.048 18:50:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.306 [2024-11-17 18:50:19.828143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.306 18:50:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:29:33.563 18:50:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:29:33.563 18:50:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:33.563 18:50:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:33.563 18:50:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:34.935 Initializing NVMe Controllers 00:29:34.935 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:29:34.935 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:29:34.935 Initialization complete. Launching workers. 00:29:34.935 ======================================================== 00:29:34.935 Latency(us) 00:29:34.935 Device Information : IOPS MiB/s Average min max 00:29:34.935 PCIE (0000:88:00.0) NSID 1 from core 0: 85615.02 334.43 373.23 33.04 6257.25 00:29:34.935 ======================================================== 00:29:34.935 Total : 85615.02 334.43 373.23 33.04 6257.25 00:29:34.935 00:29:34.935 18:50:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.306 Initializing NVMe Controllers 00:29:36.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:36.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:36.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:36.306 Initialization complete. Launching workers. 
00:29:36.306 ======================================================== 00:29:36.306 Latency(us) 00:29:36.306 Device Information : IOPS MiB/s Average min max 00:29:36.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 114.94 0.45 8977.34 141.96 45861.83 00:29:36.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.97 0.20 19393.46 7950.51 47926.00 00:29:36.306 ======================================================== 00:29:36.306 Total : 166.91 0.65 12220.68 141.96 47926.00 00:29:36.306 00:29:36.306 18:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:37.680 Initializing NVMe Controllers 00:29:37.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:37.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:37.680 Initialization complete. Launching workers. 
00:29:37.680 ======================================================== 00:29:37.680 Latency(us) 00:29:37.680 Device Information : IOPS MiB/s Average min max 00:29:37.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8513.11 33.25 3760.07 769.38 10962.00 00:29:37.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3789.92 14.80 8535.00 6748.75 47791.40 00:29:37.680 ======================================================== 00:29:37.680 Total : 12303.03 48.06 5230.98 769.38 47791.40 00:29:37.680 00:29:37.680 18:50:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:37.680 18:50:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:37.680 18:50:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:40.209 Initializing NVMe Controllers 00:29:40.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.209 Controller IO queue size 128, less than required. 00:29:40.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:40.209 Controller IO queue size 128, less than required. 00:29:40.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:40.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:40.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:40.209 Initialization complete. Launching workers. 
00:29:40.209 ======================================================== 00:29:40.209 Latency(us) 00:29:40.209 Device Information : IOPS MiB/s Average min max 00:29:40.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1654.43 413.61 79379.08 56342.55 137081.21 00:29:40.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 570.98 142.74 233630.84 78073.26 359843.68 00:29:40.209 ======================================================== 00:29:40.209 Total : 2225.41 556.35 118955.69 56342.55 359843.68 00:29:40.209 00:29:40.209 18:50:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:40.209 No valid NVMe controllers or AIO or URING devices found 00:29:40.209 Initializing NVMe Controllers 00:29:40.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.209 Controller IO queue size 128, less than required. 00:29:40.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:40.209 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:40.209 Controller IO queue size 128, less than required. 00:29:40.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:40.209 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:29:40.209 WARNING: Some requested NVMe devices were skipped 00:29:40.210 18:50:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:43.489 Initializing NVMe Controllers 00:29:43.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.489 Controller IO queue size 128, less than required. 00:29:43.489 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.489 Controller IO queue size 128, less than required. 00:29:43.489 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:43.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:43.489 Initialization complete. Launching workers. 
00:29:43.489 00:29:43.489 ==================== 00:29:43.489 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:43.489 TCP transport: 00:29:43.489 polls: 9828 00:29:43.489 idle_polls: 6724 00:29:43.489 sock_completions: 3104 00:29:43.489 nvme_completions: 6111 00:29:43.489 submitted_requests: 9238 00:29:43.489 queued_requests: 1 00:29:43.489 00:29:43.489 ==================== 00:29:43.489 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:43.489 TCP transport: 00:29:43.489 polls: 13441 00:29:43.489 idle_polls: 9790 00:29:43.489 sock_completions: 3651 00:29:43.489 nvme_completions: 6371 00:29:43.489 submitted_requests: 9530 00:29:43.489 queued_requests: 1 00:29:43.489 ======================================================== 00:29:43.489 Latency(us) 00:29:43.489 Device Information : IOPS MiB/s Average min max 00:29:43.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1525.51 381.38 85667.21 55494.88 149157.73 00:29:43.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1590.43 397.61 81405.49 41323.50 120633.66 00:29:43.489 ======================================================== 00:29:43.489 Total : 3115.94 778.99 83491.95 41323.50 149157.73 00:29:43.489 00:29:43.489 18:50:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:43.489 18:50:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:43.489 18:50:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:43.489 18:50:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:29:43.489 18:50:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:46.767 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=33f071b8-35ef-4566-9b37-99fce08e688f 00:29:46.767 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 33f071b8-35ef-4566-9b37-99fce08e688f 00:29:46.767 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=33f071b8-35ef-4566-9b37-99fce08e688f 00:29:46.767 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:29:46.767 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:29:46.767 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:29:46.767 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:47.332 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:29:47.332 { 00:29:47.332 "uuid": "33f071b8-35ef-4566-9b37-99fce08e688f", 00:29:47.332 "name": "lvs_0", 00:29:47.332 "base_bdev": "Nvme0n1", 00:29:47.332 "total_data_clusters": 238234, 00:29:47.332 "free_clusters": 238234, 00:29:47.332 "block_size": 512, 00:29:47.332 "cluster_size": 4194304 00:29:47.332 } 00:29:47.332 ]' 00:29:47.332 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="33f071b8-35ef-4566-9b37-99fce08e688f") .free_clusters' 00:29:47.332 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:29:47.332 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="33f071b8-35ef-4566-9b37-99fce08e688f") .cluster_size' 00:29:47.332 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:29:47.332 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:29:47.332 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:29:47.332 952936 00:29:47.332 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:47.332 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:47.332 18:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 33f071b8-35ef-4566-9b37-99fce08e688f lbd_0 20480 00:29:47.589 18:50:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=f1e1e307-3100-4d1a-a664-183f7749f40a 00:29:47.589 18:50:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore f1e1e307-3100-4d1a-a664-183f7749f40a lvs_n_0 00:29:48.522 18:50:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=560b980b-d8f7-468e-8021-cef067aae4c1 00:29:48.522 18:50:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 560b980b-d8f7-468e-8021-cef067aae4c1 00:29:48.522 18:50:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=560b980b-d8f7-468e-8021-cef067aae4c1 00:29:48.522 18:50:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:29:48.522 18:50:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:29:48.522 18:50:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:29:48.522 18:50:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:48.780 18:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:29:48.780 { 00:29:48.780 "uuid": "33f071b8-35ef-4566-9b37-99fce08e688f", 00:29:48.780 "name": "lvs_0", 00:29:48.780 "base_bdev": "Nvme0n1", 00:29:48.780 "total_data_clusters": 238234, 00:29:48.781 "free_clusters": 233114, 00:29:48.781 "block_size": 512, 00:29:48.781 
"cluster_size": 4194304 00:29:48.781 }, 00:29:48.781 { 00:29:48.781 "uuid": "560b980b-d8f7-468e-8021-cef067aae4c1", 00:29:48.781 "name": "lvs_n_0", 00:29:48.781 "base_bdev": "f1e1e307-3100-4d1a-a664-183f7749f40a", 00:29:48.781 "total_data_clusters": 5114, 00:29:48.781 "free_clusters": 5114, 00:29:48.781 "block_size": 512, 00:29:48.781 "cluster_size": 4194304 00:29:48.781 } 00:29:48.781 ]' 00:29:48.781 18:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="560b980b-d8f7-468e-8021-cef067aae4c1") .free_clusters' 00:29:48.781 18:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:29:48.781 18:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="560b980b-d8f7-468e-8021-cef067aae4c1") .cluster_size' 00:29:48.781 18:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:29:48.781 18:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:29:48.781 18:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:29:48.781 20456 00:29:48.781 18:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:48.781 18:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 560b980b-d8f7-468e-8021-cef067aae4c1 lbd_nest_0 20456 00:29:49.090 18:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=5c97214b-31d3-4e83-b3cc-69171115ac02 00:29:49.090 18:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:49.366 18:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:49.366 18:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 5c97214b-31d3-4e83-b3cc-69171115ac02 00:29:49.624 18:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.882 18:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:49.882 18:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:49.882 18:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:49.882 18:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:49.882 18:50:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:02.076 Initializing NVMe Controllers 00:30:02.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:02.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:02.076 Initialization complete. Launching workers. 
00:30:02.076 ======================================================== 00:30:02.076 Latency(us) 00:30:02.076 Device Information : IOPS MiB/s Average min max 00:30:02.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.78 0.02 22404.64 170.07 45050.81 00:30:02.076 ======================================================== 00:30:02.076 Total : 44.78 0.02 22404.64 170.07 45050.81 00:30:02.076 00:30:02.076 18:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:02.076 18:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:12.033 Initializing NVMe Controllers 00:30:12.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:12.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:12.033 Initialization complete. Launching workers. 
00:30:12.033 ======================================================== 00:30:12.033 Latency(us) 00:30:12.033 Device Information : IOPS MiB/s Average min max 00:30:12.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 71.90 8.99 13930.50 5968.12 47898.92 00:30:12.033 ======================================================== 00:30:12.033 Total : 71.90 8.99 13930.50 5968.12 47898.92 00:30:12.033 00:30:12.033 18:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:12.033 18:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:12.033 18:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:21.994 Initializing NVMe Controllers 00:30:21.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:21.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:21.994 Initialization complete. Launching workers. 
00:30:21.994 ======================================================== 00:30:21.994 Latency(us) 00:30:21.994 Device Information : IOPS MiB/s Average min max 00:30:21.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7729.88 3.77 4139.39 293.29 12041.51 00:30:21.994 ======================================================== 00:30:21.994 Total : 7729.88 3.77 4139.39 293.29 12041.51 00:30:21.994 00:30:21.994 18:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:21.994 18:51:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:31.960 Initializing NVMe Controllers 00:30:31.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:31.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:31.960 Initialization complete. Launching workers. 
00:30:31.960 ======================================================== 00:30:31.960 Latency(us) 00:30:31.960 Device Information : IOPS MiB/s Average min max 00:30:31.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3881.45 485.18 8247.37 783.53 17925.08 00:30:31.960 ======================================================== 00:30:31.960 Total : 3881.45 485.18 8247.37 783.53 17925.08 00:30:31.960 00:30:31.960 18:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:31.960 18:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:31.960 18:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:41.925 Initializing NVMe Controllers 00:30:41.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:41.925 Controller IO queue size 128, less than required. 00:30:41.925 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:41.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:41.925 Initialization complete. Launching workers. 
00:30:41.925 ======================================================== 00:30:41.925 Latency(us) 00:30:41.925 Device Information : IOPS MiB/s Average min max 00:30:41.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11820.26 5.77 10830.21 1714.84 23105.82 00:30:41.926 ======================================================== 00:30:41.926 Total : 11820.26 5.77 10830.21 1714.84 23105.82 00:30:41.926 00:30:41.926 18:51:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:41.926 18:51:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:54.123 Initializing NVMe Controllers 00:30:54.123 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:54.123 Controller IO queue size 128, less than required. 00:30:54.123 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:54.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:54.123 Initialization complete. Launching workers. 
00:30:54.123 ======================================================== 00:30:54.123 Latency(us) 00:30:54.123 Device Information : IOPS MiB/s Average min max 00:30:54.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1194.74 149.34 107664.39 23705.37 214770.55 00:30:54.123 ======================================================== 00:30:54.123 Total : 1194.74 149.34 107664.39 23705.37 214770.55 00:30:54.123 00:30:54.123 18:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:54.123 18:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5c97214b-31d3-4e83-b3cc-69171115ac02 00:30:54.123 18:51:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:54.123 18:51:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f1e1e307-3100-4d1a-a664-183f7749f40a 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:54.123 rmmod nvme_tcp 00:30:54.123 rmmod nvme_fabrics 00:30:54.123 rmmod nvme_keyring 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 831845 ']' 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 831845 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 831845 ']' 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 831845 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 831845 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 831845' 00:30:54.123 killing process with pid 831845 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 831845 00:30:54.123 18:51:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 831845 00:30:56.024 18:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:56.024 18:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:30:56.024 18:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:56.024 18:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:30:56.024 18:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:30:56.024 18:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:56.024 18:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:30:56.024 18:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:56.024 18:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:56.024 18:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.024 18:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.024 18:51:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:57.933 00:30:57.933 real 1m32.374s 00:30:57.933 user 5m42.264s 00:30:57.933 sys 0m15.764s 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:57.933 ************************************ 00:30:57.933 END TEST nvmf_perf 00:30:57.933 ************************************ 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:57.933 ************************************ 00:30:57.933 START TEST nvmf_fio_host 00:30:57.933 ************************************ 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:57.933 * Looking for test storage... 00:30:57.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:57.933 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- 
# export 'LCOV_OPTS= 00:30:57.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.934 --rc genhtml_branch_coverage=1 00:30:57.934 --rc genhtml_function_coverage=1 00:30:57.934 --rc genhtml_legend=1 00:30:57.934 --rc geninfo_all_blocks=1 00:30:57.934 --rc geninfo_unexecuted_blocks=1 00:30:57.934 00:30:57.934 ' 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:57.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.934 --rc genhtml_branch_coverage=1 00:30:57.934 --rc genhtml_function_coverage=1 00:30:57.934 --rc genhtml_legend=1 00:30:57.934 --rc geninfo_all_blocks=1 00:30:57.934 --rc geninfo_unexecuted_blocks=1 00:30:57.934 00:30:57.934 ' 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:57.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.934 --rc genhtml_branch_coverage=1 00:30:57.934 --rc genhtml_function_coverage=1 00:30:57.934 --rc genhtml_legend=1 00:30:57.934 --rc geninfo_all_blocks=1 00:30:57.934 --rc geninfo_unexecuted_blocks=1 00:30:57.934 00:30:57.934 ' 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:57.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.934 --rc genhtml_branch_coverage=1 00:30:57.934 --rc genhtml_function_coverage=1 00:30:57.934 --rc genhtml_legend=1 00:30:57.934 --rc geninfo_all_blocks=1 00:30:57.934 --rc geninfo_unexecuted_blocks=1 00:30:57.934 00:30:57.934 ' 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.934 18:51:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.934 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:57.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:57.935 18:51:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:57.935 18:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:31:00.468 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:00.468 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:00.468 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.469 18:51:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:00.469 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:00.469 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
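The device-name extraction above works in two steps: a glob collects `/sys/bus/pci/devices/$pci/net/*` paths, then `${pci_net_devs[@]##*/}` strips everything up to the last `/`, leaving bare interface names like `cvl_0_0`. The expansion can be checked in isolation with literal path strings (no sysfs needed):

```shell
# ${var##*/} removes the longest prefix ending in '/', i.e. a basename.
pci_net_devs=(/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0
              /sys/bus/pci/devices/0000:0a:00.1/net/cvl_0_1)
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[@]}"
```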
00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:00.469 18:51:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:00.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:31:00.469 00:31:00.469 --- 10.0.0.2 ping statistics --- 00:31:00.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.469 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:00.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
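The `ipts` helper seen above tags each rule it adds with an `SPDK_NVMF:` comment, so teardown can later match and remove exactly the rules the test inserted. A dry-run sketch of that idea, echoing instead of invoking `iptables` (which needs root); the wrapper name matches the log but this body is an assumption, not the common.sh implementation:

```shell
# Sketch: wrap iptables so every rule carries a searchable comment.
# Echo instead of executing, since the real command requires root.
ipts() { echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
rule=$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
```

Deleting by comment later only needs a grep over `iptables-save` output for the `SPDK_NVMF:` marker.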
00:31:00.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:31:00.469 00:31:00.469 --- 10.0.0.1 ping statistics --- 00:31:00.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.469 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=844562 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 844562 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 844562 ']' 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:00.469 18:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.469 [2024-11-17 18:51:46.803751] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:31:00.469 [2024-11-17 18:51:46.803824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.469 [2024-11-17 18:51:46.881091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:00.469 [2024-11-17 18:51:46.928146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.469 [2024-11-17 18:51:46.928211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:00.469 [2024-11-17 18:51:46.928225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.469 [2024-11-17 18:51:46.928242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.469 [2024-11-17 18:51:46.928251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:00.469 [2024-11-17 18:51:46.929856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.469 [2024-11-17 18:51:46.929911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:00.469 [2024-11-17 18:51:46.929959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:00.469 [2024-11-17 18:51:46.929962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.727 18:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:00.727 18:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:00.727 18:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:00.727 [2024-11-17 18:51:47.298900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.985 18:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:00.985 18:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:00.985 18:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.985 18:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:01.243 Malloc1 00:31:01.243 18:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:01.500 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:01.758 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:02.016 [2024-11-17 18:51:48.554876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.016 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:02.274 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:02.274 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:02.274 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:02.274 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:02.274 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:02.274 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:02.274 18:51:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:02.274 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:02.274 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:02.274 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.274 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:02.274 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:02.274 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:02.532 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:02.532 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:02.532 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.532 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:02.532 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:02.532 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:02.532 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:02.532 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:02.532 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:02.532 18:51:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:02.532 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:02.532 fio-3.35 00:31:02.532 Starting 1 thread 00:31:05.066 00:31:05.066 test: (groupid=0, jobs=1): err= 0: pid=845033: Sun Nov 17 18:51:51 2024 00:31:05.066 read: IOPS=8775, BW=34.3MiB/s (35.9MB/s)(68.8MiB/2007msec) 00:31:05.066 slat (nsec): min=1955, max=164745, avg=2554.08, stdev=1949.73 00:31:05.066 clat (usec): min=2596, max=13865, avg=7929.90, stdev=667.08 00:31:05.066 lat (usec): min=2629, max=13867, avg=7932.45, stdev=666.97 00:31:05.066 clat percentiles (usec): 00:31:05.066 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7373], 00:31:05.066 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8094], 00:31:05.066 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:31:05.066 | 99.00th=[ 9372], 99.50th=[ 9503], 99.90th=[11076], 99.95th=[12911], 00:31:05.066 | 99.99th=[13698] 00:31:05.066 bw ( KiB/s): min=33936, max=35632, per=100.00%, avg=35110.00, stdev=794.58, samples=4 00:31:05.066 iops : min= 8484, max= 8908, avg=8777.50, stdev=198.64, samples=4 00:31:05.066 write: IOPS=8783, BW=34.3MiB/s (36.0MB/s)(68.9MiB/2007msec); 0 zone resets 00:31:05.066 slat (usec): min=2, max=136, avg= 2.72, stdev= 1.52 00:31:05.066 clat (usec): min=1420, max=13009, avg=6583.89, stdev=572.16 00:31:05.066 lat (usec): min=1429, max=13011, avg=6586.61, stdev=572.12 00:31:05.066 clat percentiles (usec): 00:31:05.066 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6194], 00:31:05.066 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:31:05.066 | 
70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7373], 00:31:05.066 | 99.00th=[ 7767], 99.50th=[ 7963], 99.90th=[12256], 99.95th=[12780], 00:31:05.066 | 99.99th=[13042] 00:31:05.066 bw ( KiB/s): min=34752, max=35520, per=99.95%, avg=35118.00, stdev=382.91, samples=4 00:31:05.066 iops : min= 8688, max= 8880, avg=8779.50, stdev=95.73, samples=4 00:31:05.066 lat (msec) : 2=0.02%, 4=0.09%, 10=99.72%, 20=0.17% 00:31:05.066 cpu : usr=63.91%, sys=34.45%, ctx=87, majf=0, minf=36 00:31:05.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:05.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:05.066 issued rwts: total=17612,17629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.066 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:05.066 00:31:05.066 Run status group 0 (all jobs): 00:31:05.066 READ: bw=34.3MiB/s (35.9MB/s), 34.3MiB/s-34.3MiB/s (35.9MB/s-35.9MB/s), io=68.8MiB (72.1MB), run=2007-2007msec 00:31:05.066 WRITE: bw=34.3MiB/s (36.0MB/s), 34.3MiB/s-34.3MiB/s (36.0MB/s-36.0MB/s), io=68.9MiB (72.2MB), run=2007-2007msec 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
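The headline numbers of the first fio run are internally consistent: bandwidth is just IOPS times block size. A quick arithmetic check against the reported read side (8775 IOPS at the job's `--bs=4096`; integer division, so the converted figures are truncated):

```shell
iops=8775                      # read IOPS reported by fio
bs=4096                        # --bs=4096 from the job invocation
bytes_per_sec=$((iops * bs))
mb_per_sec=$((bytes_per_sec / 1000000))    # decimal MB/s, reported as 35.9MB/s
mib_per_sec=$((bytes_per_sec / 1048576))   # binary MiB/s, reported as 34.3MiB/s
echo "$bytes_per_sec $mb_per_sec $mib_per_sec"
```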
sanitizers=('libasan' 'libclang_rt.asan') 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:05.066 18:51:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:05.066 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:05.066 fio-3.35 00:31:05.066 Starting 1 thread 00:31:07.667 00:31:07.667 test: (groupid=0, jobs=1): err= 0: pid=845379: Sun Nov 17 18:51:54 2024 00:31:07.667 read: IOPS=8267, BW=129MiB/s (135MB/s)(259MiB/2008msec) 00:31:07.667 slat (usec): min=2, max=105, avg= 3.52, stdev= 1.87 00:31:07.667 clat (usec): min=2224, max=16838, avg=8776.66, stdev=2039.68 00:31:07.667 lat (usec): min=2227, max=16841, avg=8780.18, stdev=2039.68 00:31:07.667 clat percentiles (usec): 00:31:07.667 | 1.00th=[ 4686], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 7046], 00:31:07.667 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9110], 00:31:07.667 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[11469], 95.00th=[12387], 00:31:07.667 | 99.00th=[14091], 99.50th=[15401], 99.90th=[16581], 99.95th=[16581], 00:31:07.667 | 99.99th=[16712] 00:31:07.667 bw ( KiB/s): min=62720, max=77344, per=52.27%, avg=69144.00, stdev=6833.00, samples=4 00:31:07.667 iops : min= 3920, max= 4834, avg=4321.50, stdev=427.06, samples=4 00:31:07.667 write: IOPS=4914, BW=76.8MiB/s (80.5MB/s)(141MiB/1842msec); 0 zone resets 00:31:07.667 slat (usec): min=30, max=147, avg=33.78, stdev= 5.72 00:31:07.667 clat (usec): min=2727, max=19592, avg=11572.43, stdev=1883.45 00:31:07.667 lat (usec): min=2759, max=19624, avg=11606.21, stdev=1883.45 00:31:07.667 clat percentiles (usec): 00:31:07.667 | 1.00th=[ 7570], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[ 
9896], 00:31:07.667 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:31:07.667 | 70.00th=[12518], 80.00th=[13173], 90.00th=[14222], 95.00th=[14877], 00:31:07.667 | 99.00th=[15926], 99.50th=[16188], 99.90th=[17171], 99.95th=[19268], 00:31:07.667 | 99.99th=[19530] 00:31:07.667 bw ( KiB/s): min=65440, max=80352, per=91.63%, avg=72056.00, stdev=7201.70, samples=4 00:31:07.667 iops : min= 4090, max= 5022, avg=4503.50, stdev=450.11, samples=4 00:31:07.667 lat (msec) : 4=0.24%, 10=56.09%, 20=43.67% 00:31:07.667 cpu : usr=75.83%, sys=22.92%, ctx=56, majf=0, minf=58 00:31:07.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:07.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:07.667 issued rwts: total=16601,9053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.667 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:07.667 00:31:07.667 Run status group 0 (all jobs): 00:31:07.667 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (272MB), run=2008-2008msec 00:31:07.667 WRITE: bw=76.8MiB/s (80.5MB/s), 76.8MiB/s-76.8MiB/s (80.5MB/s-80.5MB/s), io=141MiB (148MB), run=1842-1842msec 00:31:07.667 18:51:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.925 18:51:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:07.925 18:51:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:07.925 18:51:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:07.925 18:51:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:07.925 18:51:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:31:07.925 
18:51:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:07.925 18:51:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:07.925 18:51:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:07.925 18:51:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:07.925 18:51:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:07.925 18:51:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:11.204 Nvme0n1 00:31:11.204 18:51:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:14.483 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=ca0cf7d2-ffbc-4c26-bcdb-a0d97fe6d105 00:31:14.483 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb ca0cf7d2-ffbc-4c26-bcdb-a0d97fe6d105 00:31:14.483 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=ca0cf7d2-ffbc-4c26-bcdb-a0d97fe6d105 00:31:14.483 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:14.483 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:14.483 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:14.483 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:31:14.483 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:14.483 { 00:31:14.483 "uuid": "ca0cf7d2-ffbc-4c26-bcdb-a0d97fe6d105", 00:31:14.483 "name": "lvs_0", 00:31:14.483 "base_bdev": "Nvme0n1", 00:31:14.483 "total_data_clusters": 930, 00:31:14.483 "free_clusters": 930, 00:31:14.483 "block_size": 512, 00:31:14.483 "cluster_size": 1073741824 00:31:14.483 } 00:31:14.483 ]' 00:31:14.483 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="ca0cf7d2-ffbc-4c26-bcdb-a0d97fe6d105") .free_clusters' 00:31:14.483 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:14.483 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="ca0cf7d2-ffbc-4c26-bcdb-a0d97fe6d105") .cluster_size' 00:31:14.483 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:14.483 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:14.483 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:14.483 952320 00:31:14.484 18:52:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:14.741 f66168d9-f0d0-430d-88eb-4dfe0146fba5 00:31:14.741 18:52:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:14.999 18:52:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:15.257 18:52:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print 
$3}' 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:15.515 18:52:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:15.772 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:15.772 fio-3.35 00:31:15.772 Starting 1 thread 00:31:18.299 00:31:18.299 test: (groupid=0, jobs=1): err= 0: pid=846679: Sun Nov 17 18:52:04 2024 00:31:18.299 read: IOPS=5877, BW=23.0MiB/s (24.1MB/s)(46.1MiB/2008msec) 00:31:18.299 slat (usec): min=2, max=174, avg= 2.67, stdev= 2.36 00:31:18.299 clat (usec): min=818, max=171438, avg=11845.02, stdev=11724.94 00:31:18.299 lat (usec): min=821, max=171492, avg=11847.69, stdev=11725.35 00:31:18.299 clat percentiles (msec): 
00:31:18.299 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:31:18.299 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:31:18.299 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:31:18.299 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:31:18.299 | 99.99th=[ 171] 00:31:18.299 bw ( KiB/s): min=16448, max=25848, per=99.86%, avg=23478.00, stdev=4686.72, samples=4 00:31:18.299 iops : min= 4112, max= 6462, avg=5869.50, stdev=1171.68, samples=4 00:31:18.299 write: IOPS=5870, BW=22.9MiB/s (24.0MB/s)(46.0MiB/2008msec); 0 zone resets 00:31:18.299 slat (usec): min=2, max=141, avg= 2.79, stdev= 1.90 00:31:18.299 clat (usec): min=281, max=169430, avg=9778.93, stdev=11002.51 00:31:18.299 lat (usec): min=284, max=169435, avg=9781.71, stdev=11002.85 00:31:18.299 clat percentiles (msec): 00:31:18.299 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:31:18.299 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:31:18.299 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:31:18.299 | 99.00th=[ 12], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 169], 00:31:18.299 | 99.99th=[ 169] 00:31:18.299 bw ( KiB/s): min=17448, max=25552, per=99.86%, avg=23450.00, stdev=4003.52, samples=4 00:31:18.299 iops : min= 4362, max= 6388, avg=5862.50, stdev=1000.88, samples=4 00:31:18.299 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:18.299 lat (msec) : 2=0.02%, 4=0.12%, 10=52.82%, 20=46.47%, 250=0.54% 00:31:18.299 cpu : usr=59.19%, sys=39.36%, ctx=105, majf=0, minf=36 00:31:18.299 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:18.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:18.299 issued rwts: total=11803,11788,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.299 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:18.299 00:31:18.299 Run 
status group 0 (all jobs): 00:31:18.299 READ: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.1MiB (48.3MB), run=2008-2008msec 00:31:18.299 WRITE: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=46.0MiB (48.3MB), run=2008-2008msec 00:31:18.299 18:52:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:18.558 18:52:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:19.930 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=fc2aee9e-47af-42ab-9c74-154044ab93a5 00:31:19.930 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb fc2aee9e-47af-42ab-9c74-154044ab93a5 00:31:19.930 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=fc2aee9e-47af-42ab-9c74-154044ab93a5 00:31:19.930 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:19.930 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:19.930 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:19.930 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:20.189 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:20.189 { 00:31:20.189 "uuid": "ca0cf7d2-ffbc-4c26-bcdb-a0d97fe6d105", 00:31:20.189 "name": "lvs_0", 00:31:20.189 "base_bdev": "Nvme0n1", 00:31:20.189 "total_data_clusters": 930, 00:31:20.189 "free_clusters": 0, 00:31:20.189 "block_size": 512, 00:31:20.189 "cluster_size": 1073741824 00:31:20.189 }, 
00:31:20.189 { 00:31:20.189 "uuid": "fc2aee9e-47af-42ab-9c74-154044ab93a5", 00:31:20.189 "name": "lvs_n_0", 00:31:20.189 "base_bdev": "f66168d9-f0d0-430d-88eb-4dfe0146fba5", 00:31:20.189 "total_data_clusters": 237847, 00:31:20.189 "free_clusters": 237847, 00:31:20.189 "block_size": 512, 00:31:20.189 "cluster_size": 4194304 00:31:20.189 } 00:31:20.189 ]' 00:31:20.189 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="fc2aee9e-47af-42ab-9c74-154044ab93a5") .free_clusters' 00:31:20.189 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:20.189 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="fc2aee9e-47af-42ab-9c74-154044ab93a5") .cluster_size' 00:31:20.189 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:20.189 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:20.189 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:20.189 951388 00:31:20.189 18:52:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:20.755 97d8d2ac-78b7-4a02-b4e8-aea83efa973e 00:31:21.013 18:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:21.270 18:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:21.528 18:52:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 
-- # asan_lib= 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:21.789 18:52:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:22.049 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:22.049 fio-3.35 00:31:22.049 Starting 1 thread 00:31:24.576 00:31:24.576 test: (groupid=0, jobs=1): err= 0: pid=847516: Sun Nov 17 18:52:10 2024 00:31:24.576 read: IOPS=5289, BW=20.7MiB/s (21.7MB/s)(41.5MiB/2009msec) 00:31:24.576 slat (usec): min=2, max=120, avg= 2.77, stdev= 2.18 00:31:24.576 clat (usec): min=4676, max=20121, avg=13148.02, stdev=1195.72 00:31:24.576 lat (usec): min=4682, max=20123, avg=13150.79, stdev=1195.61 00:31:24.576 clat percentiles (usec): 00:31:24.576 | 1.00th=[10421], 5.00th=[11338], 10.00th=[11731], 20.00th=[12256], 00:31:24.576 | 
30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:31:24.576 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14615], 95.00th=[15008], 00:31:24.576 | 99.00th=[15926], 99.50th=[16188], 99.90th=[19530], 99.95th=[19792], 00:31:24.576 | 99.99th=[20055] 00:31:24.576 bw ( KiB/s): min=20256, max=21664, per=99.73%, avg=21100.00, stdev=609.87, samples=4 00:31:24.576 iops : min= 5064, max= 5416, avg=5275.00, stdev=152.47, samples=4 00:31:24.576 write: IOPS=5278, BW=20.6MiB/s (21.6MB/s)(41.4MiB/2009msec); 0 zone resets 00:31:24.576 slat (usec): min=2, max=140, avg= 2.95, stdev= 2.18 00:31:24.576 clat (usec): min=2240, max=19447, avg=10930.79, stdev=975.93 00:31:24.576 lat (usec): min=2248, max=19450, avg=10933.75, stdev=975.92 00:31:24.576 clat percentiles (usec): 00:31:24.576 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10159], 00:31:24.576 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:31:24.576 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:31:24.576 | 99.00th=[13042], 99.50th=[13304], 99.90th=[16712], 99.95th=[17957], 00:31:24.576 | 99.99th=[19530] 00:31:24.576 bw ( KiB/s): min=20992, max=21256, per=99.96%, avg=21106.00, stdev=112.83, samples=4 00:31:24.576 iops : min= 5248, max= 5314, avg=5276.50, stdev=28.21, samples=4 00:31:24.576 lat (msec) : 4=0.04%, 10=7.30%, 20=92.64%, 50=0.02% 00:31:24.576 cpu : usr=57.92%, sys=40.74%, ctx=100, majf=0, minf=36 00:31:24.576 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:31:24.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:24.576 issued rwts: total=10626,10605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.576 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:24.576 00:31:24.576 Run status group 0 (all jobs): 00:31:24.576 READ: bw=20.7MiB/s (21.7MB/s), 20.7MiB/s-20.7MiB/s 
(21.7MB/s-21.7MB/s), io=41.5MiB (43.5MB), run=2009-2009msec 00:31:24.576 WRITE: bw=20.6MiB/s (21.6MB/s), 20.6MiB/s-20.6MiB/s (21.6MB/s-21.6MB/s), io=41.4MiB (43.4MB), run=2009-2009msec 00:31:24.576 18:52:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:24.576 18:52:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:24.577 18:52:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:28.763 18:52:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:28.763 18:52:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:32.043 18:52:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:32.043 18:52:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@124 -- # set +e 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:33.941 rmmod nvme_tcp 00:31:33.941 rmmod nvme_fabrics 00:31:33.941 rmmod nvme_keyring 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 844562 ']' 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 844562 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 844562 ']' 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 844562 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 844562 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 844562' 00:31:33.941 killing process with pid 844562 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 844562 00:31:33.941 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 844562 
00:31:34.199 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:34.199 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:34.199 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:34.200 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:34.200 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:31:34.200 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:34.200 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:34.200 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:34.200 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:34.200 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.200 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:34.200 18:52:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.734 18:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:36.734 00:31:36.734 real 0m38.370s 00:31:36.734 user 2m27.900s 00:31:36.734 sys 0m7.110s 00:31:36.734 18:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:36.734 18:52:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.734 ************************************ 00:31:36.734 END TEST nvmf_fio_host 00:31:36.734 ************************************ 00:31:36.734 18:52:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=tcp 00:31:36.734 18:52:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:36.734 18:52:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:36.734 18:52:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.734 ************************************ 00:31:36.734 START TEST nvmf_failover 00:31:36.734 ************************************ 00:31:36.734 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:36.734 * Looking for test storage... 00:31:36.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:36.734 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:36.734 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:31:36.734 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:36.734 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 
00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 
00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:36.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.735 --rc genhtml_branch_coverage=1 00:31:36.735 --rc genhtml_function_coverage=1 00:31:36.735 --rc genhtml_legend=1 00:31:36.735 --rc geninfo_all_blocks=1 00:31:36.735 --rc geninfo_unexecuted_blocks=1 00:31:36.735 00:31:36.735 ' 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:36.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.735 --rc genhtml_branch_coverage=1 00:31:36.735 --rc genhtml_function_coverage=1 00:31:36.735 --rc genhtml_legend=1 00:31:36.735 --rc geninfo_all_blocks=1 00:31:36.735 --rc geninfo_unexecuted_blocks=1 00:31:36.735 00:31:36.735 ' 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:36.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.735 --rc genhtml_branch_coverage=1 00:31:36.735 --rc genhtml_function_coverage=1 00:31:36.735 --rc genhtml_legend=1 00:31:36.735 --rc geninfo_all_blocks=1 00:31:36.735 --rc geninfo_unexecuted_blocks=1 00:31:36.735 00:31:36.735 ' 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:36.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.735 --rc genhtml_branch_coverage=1 00:31:36.735 --rc genhtml_function_coverage=1 00:31:36.735 --rc genhtml_legend=1 00:31:36.735 --rc geninfo_all_blocks=1 00:31:36.735 --rc geninfo_unexecuted_blocks=1 00:31:36.735 00:31:36.735 ' 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:36.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:36.735 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.736 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.736 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:31:36.736 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:36.736 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:36.736 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:36.736 18:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:38.638 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:38.639 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:38.639 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:38.639 18:52:25 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:38.639 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:38.639 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:38.639 
18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:38.639 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:38.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:38.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:31:38.898 00:31:38.898 --- 10.0.0.2 ping statistics --- 00:31:38.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.898 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:38.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:38.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:31:38.898 00:31:38.898 --- 10.0.0.1 ping statistics --- 00:31:38.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.898 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=850905 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 850905 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 850905 ']' 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:38.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:38.898 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:38.898 [2024-11-17 18:52:25.345707] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:31:38.898 [2024-11-17 18:52:25.345801] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:38.898 [2024-11-17 18:52:25.423571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:39.157 [2024-11-17 18:52:25.474282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:39.157 [2024-11-17 18:52:25.474334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:39.157 [2024-11-17 18:52:25.474352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:39.157 [2024-11-17 18:52:25.474365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:31:39.157 [2024-11-17 18:52:25.474375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:39.157 [2024-11-17 18:52:25.476065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:39.157 [2024-11-17 18:52:25.477696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:39.157 [2024-11-17 18:52:25.477708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.157 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:39.157 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:39.157 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:39.157 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:39.157 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:39.157 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:39.157 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:39.416 [2024-11-17 18:52:25.911883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.416 18:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:39.674 Malloc0 00:31:39.674 18:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:40.239 18:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:40.497 18:52:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.755 [2024-11-17 18:52:27.119425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.755 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:41.012 [2024-11-17 18:52:27.388162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:41.012 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:41.270 [2024-11-17 18:52:27.669183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:41.270 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=851194 00:31:41.270 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:41.270 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:41.270 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 851194 /var/tmp/bdevperf.sock 00:31:41.270 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 851194 ']' 00:31:41.270 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:41.270 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:41.270 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:41.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:41.270 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:41.270 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:41.529 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:41.529 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:41.529 18:52:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:41.787 NVMe0n1 00:31:41.787 18:52:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:42.353 00:31:42.353 18:52:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=851326 00:31:42.353 18:52:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:42.353 18:52:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:31:43.287 18:52:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:43.546 [2024-11-17 18:52:30.062058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.546 [2024-11-17 18:52:30.062152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.546 [2024-11-17 18:52:30.062178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.546 [2024-11-17 18:52:30.062191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.546 [2024-11-17 18:52:30.062204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.546 [2024-11-17 18:52:30.062216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.546 [2024-11-17 18:52:30.062229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.546 [2024-11-17 18:52:30.062241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.546 [2024-11-17 18:52:30.062254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.546 [2024-11-17 18:52:30.062266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.546 [2024-11-17 18:52:30.062284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.546 [2024-11-17 18:52:30.062297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.546 [2024-11-17 18:52:30.062318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.546 [2024-11-17 18:52:30.062346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with 
the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 
00:31:43.547 [2024-11-17 18:52:30.062593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 [2024-11-17 18:52:30.062626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2800 is same with the state(6) to be set 00:31:43.547 18:52:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:46.886 18:52:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:47.144 00:31:47.144 18:52:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:47.402 18:52:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:50.688 18:52:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:50.688 [2024-11-17 18:52:37.074794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.688 18:52:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:51.622 18:52:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:51.880 [2024-11-17 18:52:38.341246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a4570 is same with the state(6) to be set 00:31:51.880 [2024-11-17 18:52:38.341304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a4570 is same with the state(6) to be set 00:31:51.880 [2024-11-17 18:52:38.341327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a4570 is same with the state(6) to be set 00:31:51.880 [2024-11-17 18:52:38.341339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a4570 is same with the state(6) to be set 00:31:51.880 [2024-11-17 18:52:38.341352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a4570 is same with the state(6) to be set 00:31:51.880 [2024-11-17 18:52:38.341364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a4570 is same with the state(6) to be set 00:31:51.880 [2024-11-17 18:52:38.341377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a4570 is same with the state(6) to be set 00:31:51.880 [2024-11-17 18:52:38.341390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a4570 is same with the state(6) to be set 00:31:51.880 18:52:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 851326 00:31:58.447 { 00:31:58.447 "results": [ 00:31:58.447 { 00:31:58.447 "job": "NVMe0n1", 00:31:58.447 "core_mask": "0x1", 00:31:58.447 "workload": "verify", 00:31:58.447 "status": "finished", 00:31:58.447 "verify_range": { 00:31:58.447 "start": 0, 00:31:58.447 "length": 16384 00:31:58.447 }, 00:31:58.447 "queue_depth": 128, 00:31:58.447 "io_size": 4096, 00:31:58.447 "runtime": 15.005941, 00:31:58.447 "iops": 8370.61801055995, 00:31:58.447 "mibps": 32.69772660374981, 00:31:58.447 "io_failed": 14501, 
00:31:58.447 "io_timeout": 0, 00:31:58.447 "avg_latency_us": 13681.711984646981, 00:31:58.447 "min_latency_us": 533.997037037037, 00:31:58.447 "max_latency_us": 17573.357037037036 00:31:58.447 } 00:31:58.447 ], 00:31:58.447 "core_count": 1 00:31:58.447 } 00:31:58.447 18:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 851194 00:31:58.447 18:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 851194 ']' 00:31:58.447 18:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 851194 00:31:58.447 18:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:31:58.447 18:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:58.447 18:52:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 851194 00:31:58.447 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:58.447 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:58.447 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 851194' 00:31:58.447 killing process with pid 851194 00:31:58.447 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 851194 00:31:58.447 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 851194 00:31:58.448 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:58.448 [2024-11-17 18:52:27.737987] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:31:58.448 [2024-11-17 18:52:27.738115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851194 ] 00:31:58.448 [2024-11-17 18:52:27.808560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.448 [2024-11-17 18:52:27.856595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.448 Running I/O for 15 seconds... 00:31:58.448 8257.00 IOPS, 32.25 MiB/s [2024-11-17T17:52:45.024Z] [2024-11-17 18:52:30.063302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.063341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.063412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.063444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.063474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.063503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.063532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.063561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.063589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.063619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.063648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.063702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.063759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.063791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.063823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.063854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75456 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.063883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.063914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.063946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.063963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.063993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.064023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.064068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 
18:52:30.064082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.448 [2024-11-17 18:52:30.064096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.064124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.064157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.064191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.064221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.064257] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.064285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.064313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.064341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.064370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.064399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.064426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.064454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.064483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.448 [2024-11-17 18:52:30.064527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.448 [2024-11-17 18:52:30.064546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.449 [2024-11-17 18:52:30.064561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.064576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.449 [2024-11-17 18:52:30.064590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.064605] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.449 [2024-11-17 18:52:30.064619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.064634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.449 [2024-11-17 18:52:30.064648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.064663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.449 [2024-11-17 18:52:30.064703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.064723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.449 [2024-11-17 18:52:30.064738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.064754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.449 [2024-11-17 18:52:30.064768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.064783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.449 [2024-11-17 18:52:30.064798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.064838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.449 [2024-11-17 18:52:30.064852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.064878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.449 [2024-11-17 18:52:30.064891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.064906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.064921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.064936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.064950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.064964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.064998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:58.449 [2024-11-17 18:52:30.065028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 
18:52:30.065551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:18 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.449 [2024-11-17 18:52:30.065856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.449 [2024-11-17 18:52:30.065871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.065886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.065901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.065916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:58.450 [2024-11-17 18:52:30.065931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.065946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.065961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.065975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.065991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.450 [2024-11-17 18:52:30.066206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.450 [2024-11-17 18:52:30.066236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.450 [2024-11-17 18:52:30.066266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.450 [2024-11-17 18:52:30.066295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.450 [2024-11-17 18:52:30.066325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.450 [2024-11-17 18:52:30.066354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.450 [2024-11-17 18:52:30.066383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.450 [2024-11-17 18:52:30.066412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 
[2024-11-17 18:52:30.066456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 
lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.066953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.066972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.067002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.067017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 
18:52:30.067038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.067052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.067067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.067082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.067097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.067112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.450 [2024-11-17 18:52:30.067128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.450 [2024-11-17 18:52:30.067142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:30.067171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:30.067201] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:30.067230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:30.067260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:30.067290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:30.067319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:30.067348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:30.067377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:30.067411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:30.067442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:58.451 [2024-11-17 18:52:30.067489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76088 len:8 PRP1 0x0 PRP2 0x0 00:31:58.451 [2024-11-17 18:52:30.067503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:58.451 [2024-11-17 18:52:30.067533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:58.451 [2024-11-17 18:52:30.067544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76096 len:8 PRP1 0x0 PRP2 0x0 00:31:58.451 [2024-11-17 18:52:30.067557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067625] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:58.451 [2024-11-17 18:52:30.067697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.451 [2024-11-17 18:52:30.067718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.451 [2024-11-17 18:52:30.067747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.451 [2024-11-17 18:52:30.067775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.451 [2024-11-17 18:52:30.067803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:30.067824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:31:58.451 [2024-11-17 18:52:30.067872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c9890 (9): Bad file descriptor 00:31:58.451 [2024-11-17 18:52:30.071785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:58.451 [2024-11-17 18:52:30.188934] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:31:58.451 7745.50 IOPS, 30.26 MiB/s [2024-11-17T17:52:45.027Z] 8023.33 IOPS, 31.34 MiB/s [2024-11-17T17:52:45.027Z] 8190.25 IOPS, 31.99 MiB/s [2024-11-17T17:52:45.027Z] [2024-11-17 18:52:33.773293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 
18:52:33.773728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:80 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.773968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.773997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.774013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.451 [2024-11-17 18:52:33.774028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.451 [2024-11-17 18:52:33.774042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.452 [2024-11-17 18:52:33.774071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:58.452 [2024-11-17 18:52:33.774086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.452 [2024-11-17 18:52:33.774334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.452 [2024-11-17 18:52:33.774363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.452 [2024-11-17 18:52:33.774392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 
[2024-11-17 18:52:33.774580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.452 [2024-11-17 18:52:33.774908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.452 [2024-11-17 18:52:33.774938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.774953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.774985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.775001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.775016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.775032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.775061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.775080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.775095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.452 [2024-11-17 18:52:33.775109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.452 [2024-11-17 18:52:33.775123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 
[2024-11-17 18:52:33.775166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 
[2024-11-17 18:52:33.775722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.453 [2024-11-17 18:52:33.775738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.775969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.775984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.776015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.776030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.776045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.776059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.776075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.776089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.776104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.776118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.776133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.776147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.776168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.776183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.776199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.776213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.776228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.776242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 
[2024-11-17 18:52:33.776257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.776271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.776290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.776305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.776320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.776335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.453 [2024-11-17 18:52:33.776350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.453 [2024-11-17 18:52:33.776364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.776395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.776425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.776454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.776483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.776512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.776542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.776571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.776601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.776631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.776682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.776732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.776762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.454 [2024-11-17 18:52:33.776792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 
[2024-11-17 18:52:33.776807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.454 [2024-11-17 18:52:33.776822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.454 [2024-11-17 18:52:33.776852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.454 [2024-11-17 18:52:33.776883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.454 [2024-11-17 18:52:33.776913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.454 [2024-11-17 18:52:33.776943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.776969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.454 [2024-11-17 18:52:33.776984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 
[2024-11-17 18:52:33.777352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.454 [2024-11-17 18:52:33.777457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.454 [2024-11-17 18:52:33.777471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ec980 is same with the state(6) to be set 00:31:58.454 [2024-11-17 18:52:33.777488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:58.454 [2024-11-17 18:52:33.777503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:58.454 [2024-11-17 18:52:33.777515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96408 len:8 PRP1 0x0 PRP2 0x0 00:31:58.454 [2024-11-17 
18:52:33.777528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:58.454 [2024-11-17 18:52:33.777592] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:31:58.454 [2024-11-17 18:52:33.777645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:58.454 [2024-11-17 18:52:33.777671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... three more outstanding ASYNC EVENT REQUEST commands (qid:0 cid:2, cid:1, cid:0) printed and aborted with the same SQ DELETION (00/08) status ...]
00:31:58.455 [2024-11-17 18:52:33.777779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:31:58.455 [2024-11-17 18:52:33.781336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:31:58.455 [2024-11-17 18:52:33.781388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c9890 (9): Bad file descriptor
00:31:58.455 [2024-11-17 18:52:33.896926] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:31:58.455 8049.00 IOPS, 31.44 MiB/s [2024-11-17T17:52:45.031Z]
00:31:58.455 8143.00 IOPS, 31.81 MiB/s [2024-11-17T17:52:45.031Z]
00:31:58.455 8222.29 IOPS, 32.12 MiB/s [2024-11-17T17:52:45.031Z]
00:31:58.455 8281.50 IOPS, 32.35 MiB/s [2024-11-17T17:52:45.031Z]
00:31:58.455 8321.11 IOPS, 32.50 MiB/s [2024-11-17T17:52:45.031Z]
00:31:58.455 [2024-11-17 18:52:38.341769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:58.455 [2024-11-17 18:52:38.341812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command/completion pairs repeat for lba:47352 through lba:47848 (len:8 each), every command aborted with SQ DELETION (00/08) ...]
00:31:58.456 [2024-11-17 18:52:38.343871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:58.456 [2024-11-17 18:52:38.343886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command/completion pairs repeat for lba:47864 through lba:48224 (len:8 each), every command aborted with SQ DELETION (00/08) ...]
00:31:58.458 [2024-11-17 18:52:38.345353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458
[2024-11-17 18:52:38.345367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345531] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:58.458 [2024-11-17 18:52:38.345849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.345883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:58.458 [2024-11-17 18:52:38.345899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:58.458 [2024-11-17 18:52:38.345912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48360 len:8 PRP1 0x0 PRP2 0x0 00:31:58.458 [2024-11-17 18:52:38.345926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:58.458 [2024-11-17 18:52:38.346002] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:58.458 [2024-11-17 18:52:38.346039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.458 [2024-11-17 18:52:38.346058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.346073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.458 [2024-11-17 18:52:38.346092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.346107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.458 [2024-11-17 18:52:38.346121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.346137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:58.458 [2024-11-17 18:52:38.346151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:58.458 [2024-11-17 18:52:38.346165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:31:58.458 [2024-11-17 18:52:38.346220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c9890 (9): Bad file descriptor 00:31:58.458 [2024-11-17 18:52:38.349952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:31:58.458 [2024-11-17 18:52:38.465109] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:31:58.458 8228.60 IOPS, 32.14 MiB/s [2024-11-17T17:52:45.034Z] 8274.00 IOPS, 32.32 MiB/s [2024-11-17T17:52:45.034Z] 8300.33 IOPS, 32.42 MiB/s [2024-11-17T17:52:45.034Z] 8331.46 IOPS, 32.54 MiB/s [2024-11-17T17:52:45.034Z] 8352.43 IOPS, 32.63 MiB/s 00:31:58.458 Latency(us) 00:31:58.458 [2024-11-17T17:52:45.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.458 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:58.458 Verification LBA range: start 0x0 length 0x4000 00:31:58.458 NVMe0n1 : 15.01 8370.62 32.70 966.35 0.00 13681.71 534.00 17573.36 00:31:58.458 [2024-11-17T17:52:45.034Z] =================================================================================================================== 00:31:58.458 [2024-11-17T17:52:45.034Z] Total : 8370.62 32.70 966.35 0.00 13681.71 534.00 17573.36 00:31:58.458 Received shutdown signal, test time was about 15.000000 seconds 00:31:58.458 00:31:58.458 Latency(us) 00:31:58.458 [2024-11-17T17:52:45.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.458 [2024-11-17T17:52:45.034Z] =================================================================================================================== 00:31:58.458 [2024-11-17T17:52:45.034Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@65 -- # count=3 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=853060 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 853060 /var/tmp/bdevperf.sock 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 853060 ']' 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:58.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:58.458 [2024-11-17 18:52:44.738483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:58.458 18:52:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:58.458 [2024-11-17 18:52:44.999205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:58.716 18:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:58.974 NVMe0n1 00:31:58.974 18:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:59.539 00:31:59.539 18:52:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:31:59.797 00:31:59.797 18:52:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:59.797 18:52:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:00.055 18:52:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:00.312 18:52:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:03.590 18:52:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:03.591 18:52:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:03.591 18:52:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=853723 00:32:03.591 18:52:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:03.591 18:52:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 853723 00:32:04.966 { 00:32:04.966 "results": [ 00:32:04.966 { 00:32:04.966 "job": "NVMe0n1", 00:32:04.966 "core_mask": "0x1", 00:32:04.966 "workload": "verify", 00:32:04.966 "status": "finished", 00:32:04.966 "verify_range": { 00:32:04.966 "start": 0, 00:32:04.966 "length": 16384 00:32:04.966 }, 00:32:04.966 "queue_depth": 128, 00:32:04.966 "io_size": 4096, 00:32:04.966 "runtime": 1.008894, 00:32:04.966 "iops": 8518.238784252855, 00:32:04.966 "mibps": 33.274370250987715, 00:32:04.966 "io_failed": 0, 00:32:04.966 "io_timeout": 0, 00:32:04.966 "avg_latency_us": 
14935.351077323541, 00:32:04.966 "min_latency_us": 1268.242962962963, 00:32:04.966 "max_latency_us": 15728.64 00:32:04.966 } 00:32:04.966 ], 00:32:04.966 "core_count": 1 00:32:04.966 } 00:32:04.966 18:52:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:04.966 [2024-11-17 18:52:44.246421] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:32:04.966 [2024-11-17 18:52:44.246532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853060 ] 00:32:04.966 [2024-11-17 18:52:44.316732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.966 [2024-11-17 18:52:44.360505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.966 [2024-11-17 18:52:46.746897] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:04.966 [2024-11-17 18:52:46.746993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.966 [2024-11-17 18:52:46.747016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.966 [2024-11-17 18:52:46.747034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.966 [2024-11-17 18:52:46.747063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.966 [2024-11-17 18:52:46.747078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:32:04.966 [2024-11-17 18:52:46.747091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.966 [2024-11-17 18:52:46.747104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:04.966 [2024-11-17 18:52:46.747134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:04.966 [2024-11-17 18:52:46.747149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:32:04.966 [2024-11-17 18:52:46.747193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:04.966 [2024-11-17 18:52:46.747233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14eb890 (9): Bad file descriptor 00:32:04.966 [2024-11-17 18:52:46.760157] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:04.966 Running I/O for 1 seconds... 
00:32:04.966 8450.00 IOPS, 33.01 MiB/s 00:32:04.967 Latency(us) 00:32:04.967 [2024-11-17T17:52:51.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.967 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:04.967 Verification LBA range: start 0x0 length 0x4000 00:32:04.967 NVMe0n1 : 1.01 8518.24 33.27 0.00 0.00 14935.35 1268.24 15728.64 00:32:04.967 [2024-11-17T17:52:51.543Z] =================================================================================================================== 00:32:04.967 [2024-11-17T17:52:51.543Z] Total : 8518.24 33.27 0.00 0.00 14935.35 1268.24 15728.64 00:32:04.967 18:52:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:04.967 18:52:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:04.967 18:52:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:05.225 18:52:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:05.225 18:52:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:05.482 18:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:06.048 18:52:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 853060 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 853060 ']' 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 853060 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 853060 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 853060' 00:32:09.331 killing process with pid 853060 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 853060 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 853060 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:09.331 18:52:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:09.589 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:09.589 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:09.589 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:09.589 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:09.589 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:09.589 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:09.589 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:09.589 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:09.589 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:09.589 rmmod nvme_tcp 00:32:09.589 rmmod nvme_fabrics 00:32:09.589 rmmod nvme_keyring 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 850905 ']' 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 850905 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 850905 ']' 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 850905 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 850905 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 850905' 00:32:09.848 killing process with pid 850905 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 850905 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 850905 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.848 18:52:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:12.388 00:32:12.388 real 0m35.705s 00:32:12.388 user 2m5.513s 00:32:12.388 sys 
0m6.186s 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:12.388 ************************************ 00:32:12.388 END TEST nvmf_failover 00:32:12.388 ************************************ 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.388 ************************************ 00:32:12.388 START TEST nvmf_host_discovery 00:32:12.388 ************************************ 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:12.388 * Looking for test storage... 
00:32:12.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:12.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.388 --rc genhtml_branch_coverage=1 00:32:12.388 --rc genhtml_function_coverage=1 00:32:12.388 --rc 
genhtml_legend=1 00:32:12.388 --rc geninfo_all_blocks=1 00:32:12.388 --rc geninfo_unexecuted_blocks=1 00:32:12.388 00:32:12.388 ' 00:32:12.388 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:12.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.389 --rc genhtml_branch_coverage=1 00:32:12.389 --rc genhtml_function_coverage=1 00:32:12.389 --rc genhtml_legend=1 00:32:12.389 --rc geninfo_all_blocks=1 00:32:12.389 --rc geninfo_unexecuted_blocks=1 00:32:12.389 00:32:12.389 ' 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:12.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.389 --rc genhtml_branch_coverage=1 00:32:12.389 --rc genhtml_function_coverage=1 00:32:12.389 --rc genhtml_legend=1 00:32:12.389 --rc geninfo_all_blocks=1 00:32:12.389 --rc geninfo_unexecuted_blocks=1 00:32:12.389 00:32:12.389 ' 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:12.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:12.389 --rc genhtml_branch_coverage=1 00:32:12.389 --rc genhtml_function_coverage=1 00:32:12.389 --rc genhtml_legend=1 00:32:12.389 --rc geninfo_all_blocks=1 00:32:12.389 --rc geninfo_unexecuted_blocks=1 00:32:12.389 00:32:12.389 ' 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:12.389 18:52:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:12.389 18:52:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:12.389 18:52:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:12.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:12.389 18:52:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:14.288 
18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:14.288 18:53:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:14.288 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:14.288 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:14.288 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:14.288 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:14.288 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:14.289 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:14.289 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:14.289 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:14.289 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:14.289 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:14.289 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:14.289 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:14.289 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:14.289 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:14.289 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:14.289 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:14.289 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:14.289 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:14.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:14.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:32:14.547 00:32:14.547 --- 10.0.0.2 ping statistics --- 00:32:14.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.547 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:14.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:14.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:32:14.547 00:32:14.547 --- 10.0.0.1 ping statistics --- 00:32:14.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.547 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:14.547 
18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=856451 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 856451 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 856451 ']' 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:14.547 18:53:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.547 [2024-11-17 18:53:00.950633] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:32:14.547 [2024-11-17 18:53:00.950727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:14.547 [2024-11-17 18:53:01.023825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.547 [2024-11-17 18:53:01.067695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:14.547 [2024-11-17 18:53:01.067748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:14.547 [2024-11-17 18:53:01.067772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:14.547 [2024-11-17 18:53:01.067783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:14.547 [2024-11-17 18:53:01.067792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:14.547 [2024-11-17 18:53:01.068396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:14.806 [2024-11-17 18:53:01.198793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:14.806 [2024-11-17 18:53:01.207014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:14.806 null0
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:14.806 null1
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=856474
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 856474 /tmp/host.sock
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 856474 ']'
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:32:14.806 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:14.806 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:14.806 [2024-11-17 18:53:01.287119] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:32:14.806 [2024-11-17 18:53:01.287203] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856474 ]
00:32:14.806 [2024-11-17 18:53:01.361530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:15.065 [2024-11-17 18:53:01.411353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:15.065 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.323 [2024-11-17 18:53:01.816581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:15.323 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.581 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:32:15.581 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:32:15.581 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:32:15.581 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:32:15.581 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:32:15.581 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:32:15.582 18:53:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:32:16.148 [2024-11-17 18:53:02.596305] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:32:16.148 [2024-11-17 18:53:02.596342] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:32:16.148 [2024-11-17 18:53:02.596373] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:32:16.148 [2024-11-17 18:53:02.682631] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:32:16.406 [2024-11-17 18:53:02.899965] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:32:16.406 [2024-11-17 18:53:02.901145] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2319740:1 started.
00:32:16.406 [2024-11-17 18:53:02.902915] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:32:16.406 [2024-11-17 18:53:02.902939] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:32:16.406 [2024-11-17 18:53:02.905229] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2319740 was disconnected and freed. delete nvme_qpair.
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:16.665 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
[2024-11-17 18:53:03.182816] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2319940:1 started.
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
[2024-11-17 18:53:03.185765] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2319940 was disconnected and freed. delete nvme_qpair.
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:16.666 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:16.924 [2024-11-17 18:53:03.273180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:32:16.924 [2024-11-17 18:53:03.274126] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:32:16.924 [2024-11-17 18:53:03.274159] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:16.924 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-11-17 18:53:03.400525] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:32:16.925 18:53:03
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:16.925 18:53:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:17.183 [2024-11-17 18:53:03.702264] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:17.183 [2024-11-17 18:53:03.702321] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:17.183 [2024-11-17 18:53:03.702354] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:17.183 [2024-11-17 18:53:03.702362] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 
00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.118 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.119 [2024-11-17 18:53:04.501520] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:18.119 [2024-11-17 18:53:04.501560] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:18.119 [2024-11-17 18:53:04.504206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.119 [2024-11-17 18:53:04.504239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.119 [2024-11-17 18:53:04.504271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:18.119 [2024-11-17 18:53:04.504285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.119 [2024-11-17 18:53:04.504300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.119 [2024-11-17 18:53:04.504321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.119 [2024-11-17 18:53:04.504336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.119 [2024-11-17 18:53:04.504349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.119 [2024-11-17 18:53:04.504362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb900 is same with the state(6) to be set 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:18.119 18:53:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:18.119 [2024-11-17 18:53:04.514198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb900 (9): Bad file descriptor 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.119 [2024-11-17 18:53:04.524242] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:18.119 [2024-11-17 18:53:04.524268] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:18.119 [2024-11-17 18:53:04.524279] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:18.119 [2024-11-17 18:53:04.524288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:18.119 [2024-11-17 18:53:04.524318] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:18.119 [2024-11-17 18:53:04.524510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.119 [2024-11-17 18:53:04.524542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb900 with addr=10.0.0.2, port=4420 00:32:18.119 [2024-11-17 18:53:04.524559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb900 is same with the state(6) to be set 00:32:18.119 [2024-11-17 18:53:04.524583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb900 (9): Bad file descriptor 00:32:18.119 [2024-11-17 18:53:04.524632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:18.119 [2024-11-17 18:53:04.524653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:18.119 [2024-11-17 18:53:04.524670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:18.119 [2024-11-17 18:53:04.524694] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:18.119 [2024-11-17 18:53:04.524706] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:18.119 [2024-11-17 18:53:04.524715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:18.119 [2024-11-17 18:53:04.534350] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:18.119 [2024-11-17 18:53:04.534371] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:18.119 [2024-11-17 18:53:04.534380] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:18.119 [2024-11-17 18:53:04.534387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:18.119 [2024-11-17 18:53:04.534411] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:18.119 [2024-11-17 18:53:04.534578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.119 [2024-11-17 18:53:04.534612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb900 with addr=10.0.0.2, port=4420 00:32:18.119 [2024-11-17 18:53:04.534630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb900 is same with the state(6) to be set 00:32:18.119 [2024-11-17 18:53:04.534653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb900 (9): Bad file descriptor 00:32:18.119 [2024-11-17 18:53:04.534704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:18.119 [2024-11-17 18:53:04.534725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:18.119 [2024-11-17 18:53:04.534740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:18.119 [2024-11-17 18:53:04.534753] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:18.119 [2024-11-17 18:53:04.534762] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:18.119 [2024-11-17 18:53:04.534770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:18.119 [2024-11-17 18:53:04.544446] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:18.119 [2024-11-17 18:53:04.544466] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:18.119 [2024-11-17 18:53:04.544476] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:18.119 [2024-11-17 18:53:04.544483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:18.119 [2024-11-17 18:53:04.544506] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:18.119 [2024-11-17 18:53:04.544707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.119 [2024-11-17 18:53:04.544737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb900 with addr=10.0.0.2, port=4420 00:32:18.119 [2024-11-17 18:53:04.544754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb900 is same with the state(6) to be set 00:32:18.119 [2024-11-17 18:53:04.544777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb900 (9): Bad file descriptor 00:32:18.119 [2024-11-17 18:53:04.544863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:18.119 [2024-11-17 18:53:04.544887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:18.119 [2024-11-17 18:53:04.544902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:18.119 [2024-11-17 18:53:04.544914] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:18.119 [2024-11-17 18:53:04.544923] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:18.119 [2024-11-17 18:53:04.544931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:18.119 [2024-11-17 18:53:04.554541] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:18.119 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:18.119 [2024-11-17 18:53:04.554563] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:18.119 [2024-11-17 18:53:04.554577] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:18.119 [2024-11-17 18:53:04.554584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:18.119 [2024-11-17 18:53:04.554609] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.120 [2024-11-17 18:53:04.554799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:18.120 [2024-11-17 18:53:04.554830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb900 with addr=10.0.0.2, port=4420 00:32:18.120 [2024-11-17 18:53:04.554851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb900 is same with the state(6) to be set 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.120 [2024-11-17 18:53:04.554875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb900 (9): Bad file descriptor 00:32:18.120 [2024-11-17 18:53:04.554918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:18.120 [2024-11-17 18:53:04.554947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:18.120 [2024-11-17 18:53:04.554961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:18.120 [2024-11-17 18:53:04.554974] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:18.120 [2024-11-17 18:53:04.554983] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:18.120 [2024-11-17 18:53:04.554991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:18.120 [2024-11-17 18:53:04.564644] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:18.120 [2024-11-17 18:53:04.564671] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:18.120 [2024-11-17 18:53:04.564705] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:18.120 [2024-11-17 18:53:04.564715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:18.120 [2024-11-17 18:53:04.564744] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:18.120 [2024-11-17 18:53:04.564868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.120 [2024-11-17 18:53:04.564898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb900 with addr=10.0.0.2, port=4420 00:32:18.120 [2024-11-17 18:53:04.564916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb900 is same with the state(6) to be set 00:32:18.120 [2024-11-17 18:53:04.564954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb900 (9): Bad file descriptor 00:32:18.120 [2024-11-17 18:53:04.565004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:18.120 [2024-11-17 18:53:04.565024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:18.120 [2024-11-17 18:53:04.565039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:18.120 [2024-11-17 18:53:04.565052] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:18.120 [2024-11-17 18:53:04.565061] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:18.120 [2024-11-17 18:53:04.565068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:18.120 [2024-11-17 18:53:04.574778] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:18.120 [2024-11-17 18:53:04.574801] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:18.120 [2024-11-17 18:53:04.574811] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:18.120 [2024-11-17 18:53:04.574819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:18.120 [2024-11-17 18:53:04.574844] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:18.120 [2024-11-17 18:53:04.574971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.120 [2024-11-17 18:53:04.575000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb900 with addr=10.0.0.2, port=4420 00:32:18.120 [2024-11-17 18:53:04.575017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb900 is same with the state(6) to be set 00:32:18.120 [2024-11-17 18:53:04.575040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb900 (9): Bad file descriptor 00:32:18.120 [2024-11-17 18:53:04.575074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:18.120 [2024-11-17 18:53:04.575093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:18.120 [2024-11-17 18:53:04.575107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:18.120 [2024-11-17 18:53:04.575120] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:18.120 [2024-11-17 18:53:04.575129] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:18.120 [2024-11-17 18:53:04.575137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.120 [2024-11-17 18:53:04.584878] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:18.120 [2024-11-17 18:53:04.584900] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:18.120 [2024-11-17 18:53:04.584910] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:18.120 [2024-11-17 18:53:04.584917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:18.120 [2024-11-17 18:53:04.584942] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:18.120 [2024-11-17 18:53:04.585111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.120 [2024-11-17 18:53:04.585140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb900 with addr=10.0.0.2, port=4420 00:32:18.120 [2024-11-17 18:53:04.585165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb900 is same with the state(6) to be set 00:32:18.120 [2024-11-17 18:53:04.585189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb900 (9): Bad file descriptor 00:32:18.120 [2024-11-17 18:53:04.585236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:18.120 [2024-11-17 18:53:04.585256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:18.120 [2024-11-17 18:53:04.585271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:32:18.120 [2024-11-17 18:53:04.585283] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:18.120 [2024-11-17 18:53:04.585293] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:18.120 [2024-11-17 18:53:04.585300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:18.120 [2024-11-17 18:53:04.594984] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:18.120 [2024-11-17 18:53:04.595007] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:18.120 [2024-11-17 18:53:04.595017] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:18.120 [2024-11-17 18:53:04.595026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:18.120 [2024-11-17 18:53:04.595065] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:18.120 [2024-11-17 18:53:04.595271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.120 [2024-11-17 18:53:04.595301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb900 with addr=10.0.0.2, port=4420 00:32:18.120 [2024-11-17 18:53:04.595318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb900 is same with the state(6) to be set 00:32:18.120 [2024-11-17 18:53:04.595340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb900 (9): Bad file descriptor 00:32:18.120 [2024-11-17 18:53:04.595375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:18.120 [2024-11-17 18:53:04.595394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:18.120 [2024-11-17 18:53:04.595408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:18.120 [2024-11-17 18:53:04.595421] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:18.120 [2024-11-17 18:53:04.595430] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:18.120 [2024-11-17 18:53:04.595437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.120 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:18.120 [2024-11-17 18:53:04.605107] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:32:18.120 [2024-11-17 18:53:04.605128] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:18.121 [2024-11-17 18:53:04.605137] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:18.121 [2024-11-17 18:53:04.605144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:18.121 [2024-11-17 18:53:04.605167] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:18.121 [2024-11-17 18:53:04.605280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.121 [2024-11-17 18:53:04.605322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb900 with addr=10.0.0.2, port=4420 00:32:18.121 [2024-11-17 18:53:04.605339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb900 is same with the state(6) to be set 00:32:18.121 [2024-11-17 18:53:04.605361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb900 (9): Bad file descriptor 00:32:18.121 [2024-11-17 18:53:04.605546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:18.121 [2024-11-17 18:53:04.605569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:18.121 [2024-11-17 18:53:04.605598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:18.121 [2024-11-17 18:53:04.605610] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:18.121 [2024-11-17 18:53:04.605619] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:32:18.121 [2024-11-17 18:53:04.605626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:18.121 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.121 [2024-11-17 18:53:04.615202] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:18.121 [2024-11-17 18:53:04.615227] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:18.121 [2024-11-17 18:53:04.615238] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:18.121 [2024-11-17 18:53:04.615247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:18.121 [2024-11-17 18:53:04.615273] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:18.121 [2024-11-17 18:53:04.615481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.121 [2024-11-17 18:53:04.615513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb900 with addr=10.0.0.2, port=4420 00:32:18.121 [2024-11-17 18:53:04.615532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb900 is same with the state(6) to be set 00:32:18.121 [2024-11-17 18:53:04.615561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb900 (9): Bad file descriptor 00:32:18.121 [2024-11-17 18:53:04.615586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:18.121 [2024-11-17 18:53:04.615602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:18.121 [2024-11-17 18:53:04.615618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:18.121 [2024-11-17 18:53:04.615632] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:18.121 [2024-11-17 18:53:04.615643] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:18.121 [2024-11-17 18:53:04.615652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:18.121 [2024-11-17 18:53:04.625307] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:18.121 [2024-11-17 18:53:04.625328] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:18.121 [2024-11-17 18:53:04.625338] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:18.121 [2024-11-17 18:53:04.625345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:18.121 [2024-11-17 18:53:04.625368] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:18.121 [2024-11-17 18:53:04.625501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:18.121 [2024-11-17 18:53:04.625531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb900 with addr=10.0.0.2, port=4420 00:32:18.121 [2024-11-17 18:53:04.625548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb900 is same with the state(6) to be set 00:32:18.121 [2024-11-17 18:53:04.625570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb900 (9): Bad file descriptor 00:32:18.121 [2024-11-17 18:53:04.625604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:18.121 [2024-11-17 18:53:04.625623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:18.121 [2024-11-17 18:53:04.625636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:18.121 [2024-11-17 18:53:04.625649] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:18.121 [2024-11-17 18:53:04.625684] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:18.121 [2024-11-17 18:53:04.625695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:18.121 [2024-11-17 18:53:04.629882] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:18.121 [2024-11-17 18:53:04.629914] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:18.121 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:32:18.121 18:53:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:19.496 18:53:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.496 
18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:19.496 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:19.497 18:53:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:19.497 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:19.497 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.497 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.497 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.497 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:19.497 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:19.497 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:19.497 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:19.497 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:19.497 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.497 18:53:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.430 [2024-11-17 18:53:06.877413] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:20.430 [2024-11-17 18:53:06.877450] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:20.430 [2024-11-17 18:53:06.877471] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:20.430 [2024-11-17 18:53:06.964762] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:20.689 [2024-11-17 18:53:07.030418] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:20.689 [2024-11-17 18:53:07.031206] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2325700:1 started. 00:32:20.689 [2024-11-17 18:53:07.033319] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:20.689 [2024-11-17 18:53:07.033368] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:20.689 [2024-11-17 18:53:07.036211] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2325700 was disconnected and freed. delete nvme_qpair. 
00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.689 request: 00:32:20.689 { 00:32:20.689 "name": "nvme", 00:32:20.689 "trtype": "tcp", 00:32:20.689 "traddr": "10.0.0.2", 00:32:20.689 "adrfam": "ipv4", 00:32:20.689 "trsvcid": "8009", 00:32:20.689 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:20.689 "wait_for_attach": true, 00:32:20.689 "method": "bdev_nvme_start_discovery", 00:32:20.689 "req_id": 1 00:32:20.689 } 00:32:20.689 Got JSON-RPC error response 00:32:20.689 response: 00:32:20.689 { 00:32:20.689 "code": -17, 00:32:20.689 "message": "File exists" 00:32:20.689 } 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.689 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.690 request: 00:32:20.690 { 00:32:20.690 "name": "nvme_second", 00:32:20.690 "trtype": "tcp", 00:32:20.690 "traddr": "10.0.0.2", 00:32:20.690 "adrfam": "ipv4", 00:32:20.690 "trsvcid": "8009", 00:32:20.690 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:20.690 "wait_for_attach": true, 00:32:20.690 "method": "bdev_nvme_start_discovery", 00:32:20.690 "req_id": 1 00:32:20.690 } 00:32:20.690 Got JSON-RPC error response 00:32:20.690 response: 00:32:20.690 { 00:32:20.690 "code": -17, 00:32:20.690 "message": "File exists" 00:32:20.690 } 
00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 
00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:20.690 18:53:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.064 [2024-11-17 18:53:08.244973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.064 [2024-11-17 18:53:08.245051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb250 with addr=10.0.0.2, port=8010 00:32:22.064 [2024-11-17 18:53:08.245082] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:22.064 [2024-11-17 18:53:08.245096] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:22.064 [2024-11-17 18:53:08.245109] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:22.998 [2024-11-17 18:53:09.247378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.998 [2024-11-17 18:53:09.247445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb250 with addr=10.0.0.2, port=8010 00:32:22.998 [2024-11-17 18:53:09.247476] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:22.998 [2024-11-17 18:53:09.247491] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:22.998 [2024-11-17 18:53:09.247505] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:23.932 [2024-11-17 18:53:10.249545] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:23.932 request: 00:32:23.932 { 00:32:23.932 "name": "nvme_second", 00:32:23.932 "trtype": "tcp", 00:32:23.932 "traddr": "10.0.0.2", 00:32:23.932 "adrfam": "ipv4", 00:32:23.932 "trsvcid": "8010", 00:32:23.932 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:23.932 "wait_for_attach": false, 00:32:23.932 "attach_timeout_ms": 3000, 00:32:23.932 "method": "bdev_nvme_start_discovery", 00:32:23.932 "req_id": 1 
00:32:23.932 } 00:32:23.932 Got JSON-RPC error response 00:32:23.932 response: 00:32:23.932 { 00:32:23.932 "code": -110, 00:32:23.932 "message": "Connection timed out" 00:32:23.932 } 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 856474 00:32:23.932 18:53:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:23.932 rmmod nvme_tcp 00:32:23.932 rmmod nvme_fabrics 00:32:23.932 rmmod nvme_keyring 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 856451 ']' 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 856451 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 856451 ']' 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 856451 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 856451 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 856451' 00:32:23.932 killing process with pid 856451 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 856451 00:32:23.932 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 856451 00:32:24.191 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:24.191 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:24.191 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:24.191 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:24.191 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:24.191 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:24.191 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:24.191 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:24.191 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:24.191 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.191 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.191 18:53:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.197 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr 
flush cvl_0_1 00:32:26.197 00:32:26.197 real 0m14.144s 00:32:26.197 user 0m20.782s 00:32:26.197 sys 0m2.979s 00:32:26.197 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:26.197 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:26.197 ************************************ 00:32:26.197 END TEST nvmf_host_discovery 00:32:26.197 ************************************ 00:32:26.197 18:53:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:26.197 18:53:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:26.197 18:53:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:26.197 18:53:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.197 ************************************ 00:32:26.197 START TEST nvmf_host_multipath_status 00:32:26.197 ************************************ 00:32:26.197 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:26.197 * Looking for test storage... 
00:32:26.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:26.197 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:26.197 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:32:26.197 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:26.456 18:53:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:26.456 18:53:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:26.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.456 --rc genhtml_branch_coverage=1 00:32:26.456 --rc genhtml_function_coverage=1 00:32:26.456 --rc genhtml_legend=1 00:32:26.456 --rc geninfo_all_blocks=1 00:32:26.456 --rc geninfo_unexecuted_blocks=1 00:32:26.456 00:32:26.456 ' 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:26.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.456 --rc genhtml_branch_coverage=1 00:32:26.456 --rc genhtml_function_coverage=1 00:32:26.456 --rc genhtml_legend=1 00:32:26.456 --rc geninfo_all_blocks=1 00:32:26.456 --rc geninfo_unexecuted_blocks=1 00:32:26.456 00:32:26.456 ' 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:26.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.456 --rc genhtml_branch_coverage=1 00:32:26.456 --rc genhtml_function_coverage=1 00:32:26.456 --rc genhtml_legend=1 00:32:26.456 --rc geninfo_all_blocks=1 00:32:26.456 --rc geninfo_unexecuted_blocks=1 00:32:26.456 00:32:26.456 ' 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:26.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.456 --rc genhtml_branch_coverage=1 00:32:26.456 --rc genhtml_function_coverage=1 00:32:26.456 --rc genhtml_legend=1 00:32:26.456 --rc geninfo_all_blocks=1 00:32:26.456 --rc geninfo_unexecuted_blocks=1 00:32:26.456 00:32:26.456 ' 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:26.456 
18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.456 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:26.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:26.457 18:53:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:26.457 18:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:28.991 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:28.991 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:28.992 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:28.992 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.992 18:53:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:28.992 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:28.992 18:53:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:28.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:28.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:32:28.992 00:32:28.992 --- 10.0.0.2 ping statistics --- 00:32:28.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.992 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:28.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:28.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:32:28.992 00:32:28.992 --- 10.0.0.1 ping statistics --- 00:32:28.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.992 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=859654 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 859654 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 859654 ']' 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:28.992 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:28.992 [2024-11-17 18:53:15.298574] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:32:28.992 [2024-11-17 18:53:15.298652] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.992 [2024-11-17 18:53:15.374467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:28.992 [2024-11-17 18:53:15.422523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:28.992 [2024-11-17 18:53:15.422572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:28.992 [2024-11-17 18:53:15.422596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.992 [2024-11-17 18:53:15.422607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.992 [2024-11-17 18:53:15.422617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:28.992 [2024-11-17 18:53:15.424111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.992 [2024-11-17 18:53:15.424116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.993 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:28.993 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:28.993 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:28.993 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:28.993 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:28.993 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.993 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=859654 00:32:28.993 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:29.250 [2024-11-17 18:53:15.800769] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:29.250 18:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:32:29.817 Malloc0 00:32:29.817 18:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:29.817 18:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:30.382 18:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:30.382 [2024-11-17 18:53:16.906402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:30.382 18:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:30.639 [2024-11-17 18:53:17.171114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:30.639 18:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=859933 00:32:30.639 18:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:30.639 18:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:30.639 18:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 859933 /var/tmp/bdevperf.sock 00:32:30.639 18:53:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 859933 ']' 00:32:30.639 18:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:30.639 18:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:30.639 18:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:30.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:30.639 18:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:30.639 18:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:30.897 18:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.897 18:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:30.897 18:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:31.462 18:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:31.720 Nvme0n1 00:32:31.720 18:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:32.285 Nvme0n1 00:32:32.285 18:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:32.285 18:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:34.182 18:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:34.182 18:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:34.440 18:53:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:34.698 18:53:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:36.071 18:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:36.071 18:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:36.071 18:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.071 18:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:36.071 18:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.071 18:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:36.071 18:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.071 18:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:36.328 18:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:36.328 18:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:36.328 18:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.328 18:53:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:36.586 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.586 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:36.586 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.586 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:36.843 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:36.843 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:36.843 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.843 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:37.101 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.101 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:37.101 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.101 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:37.359 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.359 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:37.359 18:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:37.617 18:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:38.182 18:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:39.116 18:53:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:39.116 18:53:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:39.116 18:53:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.116 18:53:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:39.373 18:53:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:39.373 18:53:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:39.373 18:53:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.373 18:53:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:39.631 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:39.631 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:39.631 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.631 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:39.889 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:39.889 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:39.889 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.889 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:40.154 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.154 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:40.154 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.154 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:40.418 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.418 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:40.418 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.418 18:53:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:40.676 18:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.676 18:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:40.676 18:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:40.934 18:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:41.192 18:53:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:42.126 18:53:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:42.126 18:53:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:42.126 18:53:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.126 18:53:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:42.692 18:53:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:42.692 18:53:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:42.692 18:53:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.692 18:53:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:42.692 18:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:42.692 18:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:42.692 18:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.692 18:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:42.950 18:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:42.950 18:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:42.950 18:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.950 18:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:43.515 18:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.515 18:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:43.515 18:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.515 18:53:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:43.515 18:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:43.515 18:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:43.515 18:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.515 18:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:44.081 18:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.081 18:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:44.081 18:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:44.081 18:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:44.339 18:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:45.713 18:53:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:45.713 18:53:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:45.713 18:53:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.713 18:53:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:45.713 18:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:45.713 18:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:45.713 18:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:45.713 18:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:45.971 18:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:45.971 18:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:45.971 18:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:45.971 18:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:46.229 18:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:46.229 18:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:46.229 18:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:46.229 18:53:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:46.487 18:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:46.487 18:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:46.487 18:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:46.487 18:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:46.745 18:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:46.745 18:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:32:46.745 18:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:46.745 18:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:47.311 18:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:47.311 18:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:32:47.311 18:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:32:47.311 18:53:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:32:47.569 18:53:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:32:48.940 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:32:48.940 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:32:48.940 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:48.940 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:48.940 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:48.940 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:32:48.940 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:48.940 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:49.198 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:49.198 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:49.198 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:49.198 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:49.455 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:49.455 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:49.455 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:49.455 18:53:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:49.713 18:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:49.713 18:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:32:49.713 18:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:49.713 18:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:49.971 18:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:49.971 18:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:32:49.971 18:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:49.971 18:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:50.228 18:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:50.228 18:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:32:50.228 18:53:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:32:50.486 18:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:50.743 18:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:32:52.151 18:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:32:52.151 18:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:32:52.151 18:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:52.151 18:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:52.151 18:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:52.151 18:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:52.151 18:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:52.151 18:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:52.409 18:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:52.409 18:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:52.409 18:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:52.409 18:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:52.667 18:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:52.667 18:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:52.667 18:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:52.667 18:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:52.924 18:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:52.924 18:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:32:52.924 18:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:52.924 18:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:53.181 18:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:53.181 18:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:53.181 18:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:53.181 18:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:53.439 18:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:53.439 18:53:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:32:53.696 18:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:32:53.696 18:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:32:53.954 18:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:54.212 18:53:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:32:55.586 18:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:32:55.586 18:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:32:55.586 18:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:55.586 18:53:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:55.586 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:55.586 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:55.586 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:55.586 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:55.844 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:55.844 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:55.844 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:55.844 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:56.102 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:56.102 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:56.102 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:56.102 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:56.360 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:56.360 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:56.360 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:56.360 18:53:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:32:56.618 18:53:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:56.618 18:53:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:32:56.618 18:53:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:56.618 18:53:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:32:56.876 18:53:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:56.876 18:53:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:32:56.876 18:53:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:32:57.467 18:53:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:32:57.467 18:53:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:32:58.441 18:53:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:32:58.441 18:53:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:32:58.441 18:53:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:58.441 18:53:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:32:58.699 18:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:32:58.699 18:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:32:58.699 18:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:58.699 18:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:32:59.266 18:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:59.266 18:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:32:59.266 18:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:59.266 18:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:32:59.266 18:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:59.266 18:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:32:59.266 18:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:59.266 18:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:32:59.524 18:53:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:32:59.524 18:53:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:32:59.782 18:53:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:32:59.782 18:53:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:00.040 18:53:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:00.040 18:53:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:00.040 18:53:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:00.040 18:53:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:00.298 18:53:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:00.298 18:53:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:33:00.298 18:53:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:00.556 18:53:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:33:00.814 18:53:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:33:01.747 18:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:33:01.747 18:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:01.747 18:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:01.747 18:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:02.005 18:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:02.005 18:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:02.005 18:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:02.005 18:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:02.263 18:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:02.263 18:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:02.263 18:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:02.263 18:53:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:02.521 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:02.521 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:02.521 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:02.521 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:02.788 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:02.788 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:02.788 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:02.788 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:03.051 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:03.051 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:03.051 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:03.051 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:03.308 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:03.308 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:33:03.308 18:53:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:03.566 18:53:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:33:04.131 18:53:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:33:05.065 18:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:33:05.065 18:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:05.065 18:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:05.065 18:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:05.323 18:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:05.323 18:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:05.323 18:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:05.323 18:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:05.582 18:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:05.582 18:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:05.582 18:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:05.582 18:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:05.840 18:53:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:05.840 18:53:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:05.840 18:53:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:05.840 18:53:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:06.098 18:53:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:06.098 18:53:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:06.098 18:53:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:06.098 18:53:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:06.356 18:53:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:06.356 18:53:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:33:06.356 18:53:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:06.356 18:53:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:06.614 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:06.614 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 859933
00:33:06.614 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 859933 ']'
00:33:06.614 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 859933
00:33:06.614 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:33:06.614 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:06.614 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 859933
00:33:06.615 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:33:06.615 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:33:06.615 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 859933'
killing process with pid 859933
00:33:06.615 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 859933
00:33:06.615 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 859933
00:33:06.615 {
00:33:06.615 "results": [
00:33:06.615 {
00:33:06.615 "job": "Nvme0n1",
00:33:06.615 "core_mask": "0x4",
00:33:06.615 "workload": "verify",
00:33:06.615 "status": "terminated",
00:33:06.615 "verify_range": {
00:33:06.615 "start": 0,
00:33:06.615 "length": 16384
00:33:06.615 },
00:33:06.615 "queue_depth": 128,
00:33:06.615 "io_size": 4096,
00:33:06.615 "runtime": 34.262075,
00:33:06.615 "iops": 8088.622770220426,
00:33:06.615 "mibps": 31.59618269617354,
00:33:06.615 "io_failed": 0,
00:33:06.615 "io_timeout": 0,
00:33:06.615 "avg_latency_us": 15799.865899670316,
00:33:06.615 "min_latency_us": 488.4859259259259,
00:33:06.615 "max_latency_us": 4026531.84
00:33:06.615 }
00:33:06.615 ],
00:33:06.615 "core_count": 1
00:33:06.615 }
00:33:06.890 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 859933
00:33:06.890 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-11-17 18:53:17.233954] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
[2024-11-17 18:53:17.234115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859933 ]
[2024-11-17 18:53:17.305533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-17 18:53:17.351705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
00:33:06.890 8524.00 IOPS, 33.30 MiB/s [2024-11-17T17:53:53.466Z]
8595.50 IOPS, 33.58 MiB/s [2024-11-17T17:53:53.466Z]
8653.67 IOPS, 33.80 MiB/s [2024-11-17T17:53:53.466Z]
8673.00 IOPS, 33.88 MiB/s [2024-11-17T17:53:53.466Z]
8669.80 IOPS, 33.87 MiB/s [2024-11-17T17:53:53.466Z]
8648.17 IOPS, 33.78 MiB/s [2024-11-17T17:53:53.466Z]
8634.29 IOPS, 33.73 MiB/s [2024-11-17T17:53:53.466Z]
8628.75 IOPS, 33.71 MiB/s [2024-11-17T17:53:53.466Z]
8617.89 IOPS, 33.66 MiB/s [2024-11-17T17:53:53.466Z]
8625.80 IOPS, 33.69 MiB/s [2024-11-17T17:53:53.466Z]
8634.91 IOPS, 33.73 MiB/s [2024-11-17T17:53:53.466Z]
8641.58 IOPS, 33.76 MiB/s [2024-11-17T17:53:53.466Z]
8632.62 IOPS, 33.72 MiB/s [2024-11-17T17:53:53.466Z]
8631.07 IOPS, 33.72 MiB/s [2024-11-17T17:53:53.466Z]
[2024-11-17 18:53:33.839768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.890 [2024-11-17 18:53:33.839830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:33:06.890 [2024-11-17 18:53:33.839894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.890 [2024-11-17 18:53:33.839916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:33:06.890 [2024-11-17 18:53:33.839942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.890 [2024-11-17 18:53:33.839960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:33:06.890 [2024-11-17 18:53:33.840000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.890 [2024-11-17 18:53:33.840017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:33:06.890 [2024-11-17 18:53:33.840039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.890 [2024-11-17 18:53:33.840055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:33:06.890 [2024-11-17 18:53:33.840077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.890 [2024-11-17 18:53:33.840094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:33:06.890 [2024-11-17 18:53:33.840132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.890 [2024-11-17 18:53:33.840149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:33:06.890 [2024-11-17 18:53:33.840190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.890 [2024-11-17 18:53:33.840207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:33:06.890 [2024-11-17 18:53:33.841703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.890 [2024-11-17 18:53:33.841737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:33:06.890 [2024-11-17 18:53:33.841814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.890 [2024-11-17 18:53:33.841851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:06.890 [2024-11-17 18:53:33.841879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.890 [2024-11-17 18:53:33.841898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:06.890 [2024-11-17 18:53:33.841924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.890 [2024-11-17 18:53:33.841941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:06.890 [2024-11-17 18:53:33.841967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.890 [2024-11-17 18:53:33.841984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:06.890 [2024-11-17 18:53:33.842027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.890 [2024-11-17 18:53:33.842044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:06.890 [2024-11-17 18:53:33.842069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.890 
[2024-11-17 18:53:33.842086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:06.890 [2024-11-17 18:53:33.842111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.890 [2024-11-17 18:53:33.842128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:06.890 [2024-11-17 18:53:33.842152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.890 [2024-11-17 18:53:33.842168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:06.890 [2024-11-17 18:53:33.842193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.890 [2024-11-17 18:53:33.842209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:06.890 [2024-11-17 18:53:33.842233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.890 [2024-11-17 18:53:33.842249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:06.890 [2024-11-17 18:53:33.842274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.890 [2024-11-17 18:53:33.842291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:06.890 [2024-11-17 
18:53:33.842331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.890 [2024-11-17 18:53:33.842347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:06.890 [2024-11-17 18:53:33.842372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.890 [2024-11-17 18:53:33.842392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.842417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.842434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.842458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.842475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.842500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:117536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.842516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.842540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 
18:53:33.842556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.842580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.842597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.842621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.842637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.842662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.842703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.842732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.842749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.842773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.842790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 
18:53:33.842815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.842832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.842856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.842873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.842897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.842917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.842943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.842960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.842999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 
18:53:33.843055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 
18:53:33.843296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 
18:53:33.843522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 
18:53:33.843816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.843903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.843920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.844183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.891 [2024-11-17 18:53:33.844207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.844240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.844259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.844292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 
18:53:33.844310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.844339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.844356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:06.891 [2024-11-17 18:53:33.844385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.891 [2024-11-17 18:53:33.844402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.844446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.844463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.844492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.844524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.844552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.844568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 
18:53:33.844594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.844610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.844637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.844653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.844703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.844722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.844750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.844766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.844794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.844811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.844838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 
18:53:33.844854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.844882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.844903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.844931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.844948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.844975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 
18:53:33.845121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845358] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845602] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845861] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:33.845889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:33.845906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:06.892 8623.67 IOPS, 33.69 MiB/s [2024-11-17T17:53:53.468Z] 8084.69 IOPS, 31.58 MiB/s [2024-11-17T17:53:53.468Z] 7609.12 IOPS, 29.72 MiB/s [2024-11-17T17:53:53.468Z] 7186.39 IOPS, 28.07 MiB/s [2024-11-17T17:53:53.468Z] 6808.32 IOPS, 26.59 MiB/s [2024-11-17T17:53:53.468Z] 6901.55 IOPS, 26.96 MiB/s [2024-11-17T17:53:53.468Z] 6979.48 IOPS, 27.26 MiB/s [2024-11-17T17:53:53.468Z] 7088.18 IOPS, 27.69 MiB/s [2024-11-17T17:53:53.468Z] 7276.78 IOPS, 28.42 MiB/s [2024-11-17T17:53:53.468Z] 7439.50 IOPS, 29.06 MiB/s [2024-11-17T17:53:53.468Z] 7579.48 IOPS, 29.61 MiB/s [2024-11-17T17:53:53.468Z] 7627.23 IOPS, 29.79 MiB/s [2024-11-17T17:53:53.468Z] 7667.93 IOPS, 29.95 MiB/s [2024-11-17T17:53:53.468Z] 7702.86 IOPS, 30.09 MiB/s [2024-11-17T17:53:53.468Z] 7788.59 IOPS, 30.42 MiB/s [2024-11-17T17:53:53.468Z] 7908.70 IOPS, 30.89 MiB/s [2024-11-17T17:53:53.468Z] 8009.16 IOPS, 31.29 MiB/s [2024-11-17T17:53:53.468Z] [2024-11-17 18:53:50.404545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:50.404640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:50.404687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:50.404731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:50.404769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.892 [2024-11-17 18:53:50.404787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:06.892 [2024-11-17 18:53:50.404810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.404827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.404850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.404867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.404892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.404909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.404931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.404949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.404972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.405000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.405040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.405058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.405081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.405097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.405120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.405136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.405174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.405190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.405211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.405236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.405259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.405275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.405296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.405312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.405333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.405349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.405369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.405385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.405406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.893 [2024-11-17 18:53:50.405423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.408950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.408978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.409021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.409045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.409061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.409083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.409099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.409121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.409137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.409158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.893 [2024-11-17 18:53:50.409175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:06.893 [2024-11-17 18:53:50.409214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.894 [2024-11-17 18:53:50.409385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.894 [2024-11-17 18:53:50.409425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.894 [2024-11-17 18:53:50.409884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.409962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.409984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.410021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.410044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.410060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.410081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.410097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.410118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.410134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.410155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.410171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.410191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.410209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.410230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.410246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.410268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.410283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.410305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.410321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.411991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.412032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.412060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.894 [2024-11-17 18:53:50.412078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.412100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.894 [2024-11-17 18:53:50.412116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.412139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.894 [2024-11-17 18:53:50.412161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:06.894 [2024-11-17 18:53:50.412184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.894 [2024-11-17 18:53:50.412201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:06.895 [2024-11-17 18:53:50.412223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.895 [2024-11-17 18:53:50.412240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:06.895 [2024-11-17 18:53:50.412278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.895 [2024-11-17 18:53:50.412294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:06.895 [2024-11-17 18:53:50.412317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.895 [2024-11-17 18:53:50.412333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:06.895 [2024-11-17 18:53:50.412354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.895 [2024-11-17 18:53:50.412370] nvme_qpair.c: 
00:33:06.895-00:33:06.898 [2024-11-17 18:53:50.412 - 18:53:50.423] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands on sqid:1 nsid:1, lba 64648-65736 len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd advancing 002c-007f then wrapping to 0000-001f [~170 near-identical command/completion pairs condensed]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.423242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.423296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.423344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.423399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.423452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.423493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.423532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.423571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.423620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.423661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.423721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423744] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.423761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.423800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.423839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.423884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.423924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.423964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.423986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.424008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.424048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.424088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.424128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.424168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.424223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.424276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.424313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.424367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.424409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.898 [2024-11-17 18:53:50.424447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.424491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.424530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.424567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.424605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.424656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.424717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.424742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.424760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:06.898 [2024-11-17 18:53:50.426357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.898 [2024-11-17 18:53:50.426380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.426406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.426423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.426444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.426460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.426498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.899 [2024-11-17 18:53:50.426519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.426559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.899 [2024-11-17 18:53:50.426576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.426614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.899 [2024-11-17 18:53:50.426632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.427484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.427508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.427533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.427565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.427588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.427619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.427644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.427661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.427691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.427720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.427744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.427761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.427783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.427800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.427822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.427839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.427862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.427879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.427901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.427918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.427946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.427964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.428023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.428076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.899 [2024-11-17 18:53:50.428113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.899 [2024-11-17 18:53:50.428149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.899 [2024-11-17 18:53:50.428185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.899 [2024-11-17 18:53:50.428222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.899 [2024-11-17 18:53:50.428258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.899 [2024-11-17 18:53:50.428294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.899 [2024-11-17 18:53:50.428346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.899 [2024-11-17 18:53:50.428398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.428438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.428501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.899 [2024-11-17 18:53:50.428541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.899 [2024-11-17 18:53:50.428581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.428621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.428661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.428691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.899 [2024-11-17 18:53:50.428720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:06.899 [2024-11-17 18:53:50.430061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.899 [2024-11-17 18:53:50.430094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.430140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.430180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.430222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.430278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.430331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.430374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.430414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.430451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.430489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.430526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.430564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.430601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.430655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.430722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.430761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.430801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.430841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.430883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.430925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.430965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.430998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.431015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.431038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.431055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.431077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.431094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.431117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.431133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.431157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.431188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.431211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.431227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.431880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.431904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.431932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.431965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.431989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.432009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.432048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.432064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.432101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.432118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.432141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.432157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.432180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.432196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.432218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.432235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.432257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.432274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.432297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.432313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.432350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.900 [2024-11-17 18:53:50.432367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.432403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.900 [2024-11-17 18:53:50.432419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:06.900 [2024-11-17 18:53:50.432441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.432456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.434519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.434544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.434573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.434592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.434615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.434634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.434662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.434689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.434714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.434731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.434754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.434771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.434794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.434811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.434834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.434851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.434874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.434891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.434914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.434931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.434954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.434982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.435040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.435094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.435131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.435167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.435207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.435247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.435283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.435335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.435389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.435429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.435468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.435506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.435545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.435584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.435622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.435684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.435733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.435774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.435814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.435854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.435894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.435933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.435956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.435986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.436018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.436049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.436070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.436086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.436106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.901 [2024-11-17 18:53:50.436122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.436143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.901 [2024-11-17 18:53:50.436158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:06.901 [2024-11-17 18:53:50.436178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.902 [2024-11-17 18:53:50.436194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.436215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.902 [2024-11-17 18:53:50.436231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.436256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.902 [2024-11-17 18:53:50.436272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.436293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.902 [2024-11-17 18:53:50.436308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.436330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.902 [2024-11-17 18:53:50.436346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.438883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.902 [2024-11-17 18:53:50.438909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.438937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.902 [2024-11-17 18:53:50.438971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.438995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.902 [2024-11-17 18:53:50.439011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.439050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.902 [2024-11-17 18:53:50.439066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.439088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.902 [2024-11-17 18:53:50.439103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.439125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.902 [2024-11-17 18:53:50.439155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.439177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.902 [2024-11-17 18:53:50.439192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.439213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.902 [2024-11-17 18:53:50.439228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.439249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.902 [2024-11-17 18:53:50.439265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.439291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.902 [2024-11-17 18:53:50.439307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.439328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.902 [2024-11-17 18:53:50.439343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.439363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.902 [2024-11-17 18:53:50.439380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.439418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.902 [2024-11-17 18:53:50.439438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:06.902 [2024-11-17 18:53:50.439478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.902 [2024-11-17 18:53:50.439495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.439518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.902 [2024-11-17 18:53:50.439535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.439558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.902 [2024-11-17 18:53:50.439575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.439598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.902 [2024-11-17 18:53:50.439615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.439639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.902 [2024-11-17 18:53:50.439656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.439686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.902 [2024-11-17 18:53:50.439705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.439728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.902 [2024-11-17 18:53:50.439746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.439769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.902 [2024-11-17 18:53:50.439786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.439808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.902 [2024-11-17 18:53:50.439832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.439857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.902 [2024-11-17 18:53:50.439875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.439897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.902 [2024-11-17 18:53:50.439914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.439937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.902 [2024-11-17 18:53:50.439979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.440002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.902 [2024-11-17 18:53:50.440019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.440056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.902 [2024-11-17 18:53:50.440072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.440093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.902 [2024-11-17 18:53:50.440124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.440146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.902 [2024-11-17 18:53:50.440160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.440181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.902 [2024-11-17 18:53:50.440196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.440217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.902 [2024-11-17 18:53:50.440232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.440253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.902 [2024-11-17 18:53:50.440269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.440289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.902 [2024-11-17 18:53:50.440305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:33:06.902 [2024-11-17 18:53:50.440326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.902 [2024-11-17 18:53:50.440361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.440385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.440415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.440439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.440456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.440495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.903 [2024-11-17 18:53:50.440513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.440535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.440553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.440576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.903 [2024-11-17 18:53:50.440593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.440616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.440633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.440656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.903 [2024-11-17 18:53:50.440682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.440709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.903 [2024-11-17 18:53:50.440732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.440755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.903 [2024-11-17 18:53:50.440771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.440795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.903 [2024-11-17 18:53:50.440812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.440834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.903 [2024-11-17 18:53:50.440851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.440874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.903 [2024-11-17 18:53:50.440891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.441762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.903 [2024-11-17 18:53:50.441787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.441814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.903 [2024-11-17 18:53:50.441832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.441857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.903 [2024-11-17 18:53:50.441874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.441897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.903 [2024-11-17 18:53:50.441915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.441937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.441968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.441992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.442008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.442030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.442046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.442069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.442085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.442106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.442123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.442144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.442160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.442182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.903 [2024-11-17 18:53:50.442212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.442896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.442921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.442954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.442978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.443002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.443020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.443054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.443071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.443094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.443111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.443134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.443151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.443174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.443191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.443214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.443232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.443255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.443273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.443314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.443331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.443354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.903 [2024-11-17 18:53:50.443384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:06.903 [2024-11-17 18:53:50.443406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.443421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.443442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.443472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.443496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.443516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.443539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.443555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.443576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.443592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.443614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.443630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.443652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.443700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.443733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.443750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.443773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.443790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.443812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.443829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.443852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.443868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.443891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.443908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.446136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.446178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.446220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.446258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.446310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.446350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.446388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.446426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.446464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.446502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.446540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.446578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.446629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.446688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.446740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.446782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.446821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.446858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.446895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.446933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.446954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.446984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.447006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.447022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.447043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.447058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.447078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.447094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.447115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.904 [2024-11-17 18:53:50.447130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.447151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.447166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.447186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.904 [2024-11-17 18:53:50.447201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:06.904 [2024-11-17 18:53:50.447226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.447242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.447263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.905 [2024-11-17 18:53:50.447278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.447299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.905 [2024-11-17 18:53:50.447315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.447337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.447352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.905 [2024-11-17 18:53:50.450143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.905 [2024-11-17 18:53:50.450188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:06.905 [2024-11-17 18:53:50.450224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:33:06.905 [2024-11-17 18:53:50.450852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.905 [2024-11-17 18:53:50.450869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.450892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.905 [2024-11-17 18:53:50.450909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.450931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.905 [2024-11-17 18:53:50.450948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.450981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.905 [2024-11-17 18:53:50.451002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.451025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.905 [2024-11-17 18:53:50.451043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.451065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.905 [2024-11-17 18:53:50.451082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.451105] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.905 [2024-11-17 18:53:50.451122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.451145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.905 [2024-11-17 18:53:50.451162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.451185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.905 [2024-11-17 18:53:50.451201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.451224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.905 [2024-11-17 18:53:50.451241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.451264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.905 [2024-11-17 18:53:50.451281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.451320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.905 [2024-11-17 18:53:50.451336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.451373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.905 [2024-11-17 18:53:50.451389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.451410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.905 [2024-11-17 18:53:50.451426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.451447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.905 [2024-11-17 18:53:50.451462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.451484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.905 [2024-11-17 18:53:50.451499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.451524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.905 [2024-11-17 18:53:50.451541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:06.905 [2024-11-17 18:53:50.451578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.905 [2024-11-17 18:53:50.451595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.451618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.451650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.451681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.451700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.451730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.451747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.451770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.451787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.451809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.451827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.451850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.451867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.451889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.451907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.451929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.451946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.451990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.452005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.452042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.452058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.452084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.452100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.452905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.452930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.452958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.452991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.453045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.453083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.453118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.453155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.453192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.453228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.453265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.453302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.453338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.453381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.453418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.453470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.453509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.453547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.453585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.453608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.453624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.454349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.454375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.454402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.454421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.454461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.454478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.454516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.454532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.454553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.454570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.454590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.454610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.454632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.454648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.454695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.906 [2024-11-17 18:53:50.454739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.454763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.454779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:06.906 [2024-11-17 18:53:50.454800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.906 [2024-11-17 18:53:50.454817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.454839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.454870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.454894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.454911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.454934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.454951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.454979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.454996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.455035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.455075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.455115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.455159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.455217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.455270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.455307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.455349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.455387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.455440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.455478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.455533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.455589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.455629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.455670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.455703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.455721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.456291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.456316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.456344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.456362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.456385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.456403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.456427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.456444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.456483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.456500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.456522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.456539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.456561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.456593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.456616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.456633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.456669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.456694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.456732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.456749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.456771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.456787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.458811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.458836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.458869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.458889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.458931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.458948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.458995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.459012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.459033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.459050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.459071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.459086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.459122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.907 [2024-11-17 18:53:50.459139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:06.907 [2024-11-17 18:53:50.459162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.907 [2024-11-17 18:53:50.459178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.459216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.459271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.459311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.908 [2024-11-17 18:53:50.459368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.908 [2024-11-17 18:53:50.459409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.908 [2024-11-17 18:53:50.459453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.459495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.459536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.908 [2024-11-17 18:53:50.459576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.908 [2024-11-17 18:53:50.459616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.908 [2024-11-17 18:53:50.459656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.908 [2024-11-17 18:53:50.459706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.908 [2024-11-17 18:53:50.459763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.908 [2024-11-17 18:53:50.459816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.459857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.459899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.459922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.459940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.461486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.908 [2024-11-17 18:53:50.461515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.461544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.908 [2024-11-17 18:53:50.461563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.461587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.461604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.461627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.461644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.461667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.461695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.461724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.461741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.461764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.461781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.461804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.461821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.461860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.461877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.461899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.461916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.461953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.461969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.462017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.462033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.462054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.462070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.462094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.908 [2024-11-17 18:53:50.462110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.462132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.908 [2024-11-17 18:53:50.462148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.462169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.908 [2024-11-17 18:53:50.462185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:06.908 [2024-11-17 18:53:50.462205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.908 [2024-11-17 18:53:50.462221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.462257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.462293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.462329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.462365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.462401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.462438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:66888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.462474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.462527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.462569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.462606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.462644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.462689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.462728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.462765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.462803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.462840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.462862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.462878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.465218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.465296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.465335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.465378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.465432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.465468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.465504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.465541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.465577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.465613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.465649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.465709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.465765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.465805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.465844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.465886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.465926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.909 [2024-11-17 18:53:50.465965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.465987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.466003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.466025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.466042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.466065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.466095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:06.909 [2024-11-17 18:53:50.466118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.909 [2024-11-17 18:53:50.466133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.466155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.910 [2024-11-17 18:53:50.466171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.466192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.910 [2024-11-17 18:53:50.466208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.466228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.466244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.466266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.466297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.466320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.466336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.466376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.466393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.466421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.466438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.466461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.910 [2024-11-17 18:53:50.466477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.466500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.466518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.466540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.910 [2024-11-17 18:53:50.466557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.466580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.466597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.466619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.910 [2024-11-17 18:53:50.466636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.466682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.466700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.466743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.466760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.467920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.467944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.467971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.467991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.468033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.910 [2024-11-17 18:53:50.468088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.910 [2024-11-17 18:53:50.468148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.910 [2024-11-17 18:53:50.468185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.910 [2024-11-17 18:53:50.468221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.910 [2024-11-17 18:53:50.468258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.910 [2024-11-17 18:53:50.468295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.910 [2024-11-17 18:53:50.468348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:06.910 [2024-11-17 18:53:50.468403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.468444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.468484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.468523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.468564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.468603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:06.910 [2024-11-17 18:53:50.468626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:06.910 [2024-11-17 18:53:50.468662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:06.910 8055.94 IOPS, 31.47 MiB/s [2024-11-17T17:53:53.486Z] 8074.91 IOPS, 31.54 MiB/s [2024-11-17T17:53:53.486Z] 8090.38 IOPS, 31.60 MiB/s [2024-11-17T17:53:53.486Z] Received shutdown signal, test time was about 34.262864 seconds 00:33:06.910 00:33:06.910 Latency(us) 00:33:06.910 [2024-11-17T17:53:53.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:06.910 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:06.910 Verification LBA range: start 0x0 length 0x4000 00:33:06.910 Nvme0n1 : 34.26 8088.62 31.60 0.00 0.00 15799.87 488.49 4026531.84 00:33:06.910 [2024-11-17T17:53:53.486Z] =================================================================================================================== 00:33:06.910 [2024-11-17T17:53:53.486Z] Total : 8088.62 31.60 0.00 0.00 15799.87 488.49 4026531.84 00:33:06.910 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # 
nvmftestfini 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.169 rmmod nvme_tcp 00:33:07.169 rmmod nvme_fabrics 00:33:07.169 rmmod nvme_keyring 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 859654 ']' 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 859654 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 859654 ']' 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 859654 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 859654 00:33:07.169 18:53:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 859654' 00:33:07.169 killing process with pid 859654 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 859654 00:33:07.169 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 859654 00:33:07.427 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:07.427 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:07.427 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:07.427 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:07.427 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:33:07.427 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:33:07.427 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:07.427 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.427 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.427 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.427 18:53:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.427 18:53:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.973 18:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.973 00:33:09.973 real 0m43.244s 00:33:09.973 user 2m11.690s 00:33:09.973 sys 0m10.644s 00:33:09.973 18:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.973 18:53:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:09.973 ************************************ 00:33:09.973 END TEST nvmf_host_multipath_status 00:33:09.973 ************************************ 00:33:09.973 18:53:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:09.973 18:53:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:09.973 18:53:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.973 18:53:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.973 ************************************ 00:33:09.973 START TEST nvmf_discovery_remove_ifc 00:33:09.974 ************************************ 00:33:09.974 18:53:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:09.974 * Looking for test storage... 
00:33:09.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:09.974 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:09.974 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:33:09.974 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:09.974 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:09.974 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.974 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.974 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.974 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.975 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.975 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.975 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.975 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.975 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.975 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.975 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:09.975 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:09.975 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:33:09.975 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.976 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:09.976 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:09.976 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:09.976 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.976 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:09.976 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.976 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:09.976 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:09.976 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.976 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:09.976 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.976 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.976 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.976 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:09.977 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.977 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:33:09.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.977 --rc genhtml_branch_coverage=1 00:33:09.977 --rc genhtml_function_coverage=1 00:33:09.977 --rc genhtml_legend=1 00:33:09.977 --rc geninfo_all_blocks=1 00:33:09.977 --rc geninfo_unexecuted_blocks=1 00:33:09.977 00:33:09.977 ' 00:33:09.977 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:09.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.977 --rc genhtml_branch_coverage=1 00:33:09.977 --rc genhtml_function_coverage=1 00:33:09.977 --rc genhtml_legend=1 00:33:09.977 --rc geninfo_all_blocks=1 00:33:09.977 --rc geninfo_unexecuted_blocks=1 00:33:09.977 00:33:09.977 ' 00:33:09.977 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:09.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.977 --rc genhtml_branch_coverage=1 00:33:09.977 --rc genhtml_function_coverage=1 00:33:09.977 --rc genhtml_legend=1 00:33:09.977 --rc geninfo_all_blocks=1 00:33:09.977 --rc geninfo_unexecuted_blocks=1 00:33:09.977 00:33:09.977 ' 00:33:09.977 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:09.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.977 --rc genhtml_branch_coverage=1 00:33:09.977 --rc genhtml_function_coverage=1 00:33:09.977 --rc genhtml_legend=1 00:33:09.977 --rc geninfo_all_blocks=1 00:33:09.977 --rc geninfo_unexecuted_blocks=1 00:33:09.978 00:33:09.978 ' 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.978 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.979 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.979 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.979 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.979 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.979 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.979 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.979 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.979 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:09.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:09.980 
18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.980 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.984 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:09.984 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:09.985 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:09.985 18:53:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:11.891 18:53:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:11.891 18:53:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:11.891 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:11.891 18:53:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:11.891 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:11.891 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.891 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:11.892 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:11.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:11.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:33:11.892 00:33:11.892 --- 10.0.0.2 ping statistics --- 00:33:11.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.892 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:11.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:11.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:33:11.892 00:33:11.892 --- 10.0.0.1 ping statistics --- 00:33:11.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.892 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=866387 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 866387 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 866387 ']' 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:11.892 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:11.892 [2024-11-17 18:53:58.381472] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:33:11.892 [2024-11-17 18:53:58.381562] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:11.892 [2024-11-17 18:53:58.452479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.150 [2024-11-17 18:53:58.495560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.150 [2024-11-17 18:53:58.495618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:12.150 [2024-11-17 18:53:58.495646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.150 [2024-11-17 18:53:58.495658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.150 [2024-11-17 18:53:58.495667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:12.150 [2024-11-17 18:53:58.496300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.150 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.150 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:12.150 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:12.150 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.150 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:12.150 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.150 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:12.150 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.150 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:12.150 [2024-11-17 18:53:58.644535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.150 [2024-11-17 18:53:58.652774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:12.150 null0 00:33:12.150 [2024-11-17 18:53:58.684686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:33:12.150 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.150 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=866410 00:33:12.150 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:12.151 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 866410 /tmp/host.sock 00:33:12.151 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 866410 ']' 00:33:12.151 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:12.151 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.151 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:12.151 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:12.151 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.151 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:12.409 [2024-11-17 18:53:58.751036] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:33:12.409 [2024-11-17 18:53:58.751116] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866410 ] 00:33:12.409 [2024-11-17 18:53:58.817393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.409 [2024-11-17 18:53:58.862132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.409 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.409 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:12.409 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:12.409 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:12.409 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.409 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:12.409 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.409 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:12.409 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.409 18:53:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:12.668 18:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.668 18:53:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:12.668 18:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.668 18:53:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.602 [2024-11-17 18:54:00.132283] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:13.602 [2024-11-17 18:54:00.132322] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:13.602 [2024-11-17 18:54:00.132344] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:13.860 [2024-11-17 18:54:00.259780] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:13.861 [2024-11-17 18:54:00.360623] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:13.861 [2024-11-17 18:54:00.361725] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x17ea370:1 started. 
00:33:13.861 [2024-11-17 18:54:00.363412] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:13.861 [2024-11-17 18:54:00.363468] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:13.861 [2024-11-17 18:54:00.363499] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:13.861 [2024-11-17 18:54:00.363522] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:13.861 [2024-11-17 18:54:00.363558] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:13.861 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.861 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:13.861 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:13.861 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.861 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:13.861 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.861 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.861 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:13.861 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:13.861 [2024-11-17 18:54:00.370800] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x17ea370 was disconnected and freed. delete nvme_qpair. 
00:33:13.861 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.861 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:13.861 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:13.861 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:14.119 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:14.119 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:14.119 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:14.119 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:14.119 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.119 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.119 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:14.119 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:14.119 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.119 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:14.119 18:54:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:15.052 18:54:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:15.052 18:54:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:15.052 18:54:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:15.052 18:54:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.052 18:54:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:15.052 18:54:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:15.052 18:54:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:15.052 18:54:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.052 18:54:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:15.052 18:54:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:15.987 18:54:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:15.987 18:54:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:15.987 18:54:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:15.987 18:54:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.987 18:54:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:15.987 18:54:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:15.987 18:54:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:33:15.987 18:54:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.245 18:54:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:16.245 18:54:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:17.178 18:54:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:17.178 18:54:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.178 18:54:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:17.178 18:54:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.178 18:54:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:17.178 18:54:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:17.178 18:54:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:17.178 18:54:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.178 18:54:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:17.178 18:54:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:18.113 18:54:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:18.113 18:54:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.113 18:54:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:18.113 18:54:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.113 18:54:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.113 18:54:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:18.113 18:54:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:18.113 18:54:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.113 18:54:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:18.113 18:54:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:19.487 18:54:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:19.487 18:54:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.487 18:54:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:19.487 18:54:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:19.487 18:54:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.487 18:54:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:19.487 18:54:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:19.487 18:54:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.487 18:54:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:19.488 18:54:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:33:19.488 [2024-11-17 18:54:05.804852] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:19.488 [2024-11-17 18:54:05.804928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.488 [2024-11-17 18:54:05.804950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.488 [2024-11-17 18:54:05.804983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.488 [2024-11-17 18:54:05.804996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.488 [2024-11-17 18:54:05.805009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.488 [2024-11-17 18:54:05.805023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.488 [2024-11-17 18:54:05.805037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.488 [2024-11-17 18:54:05.805050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.488 [2024-11-17 18:54:05.805064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.488 [2024-11-17 18:54:05.805076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.488 [2024-11-17 18:54:05.805088] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c6bc0 is same with the state(6) to be set 00:33:19.488 [2024-11-17 18:54:05.814870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c6bc0 (9): Bad file descriptor 00:33:19.488 [2024-11-17 18:54:05.824916] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:19.488 [2024-11-17 18:54:05.824939] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:19.488 [2024-11-17 18:54:05.824973] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:19.488 [2024-11-17 18:54:05.824982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:19.488 [2024-11-17 18:54:05.825018] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:20.422 18:54:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:20.422 18:54:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:20.422 18:54:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.422 18:54:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:20.422 18:54:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.422 18:54:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:20.422 18:54:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:20.422 [2024-11-17 18:54:06.835716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:20.422 [2024-11-17 18:54:06.835782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c6bc0 with addr=10.0.0.2, port=4420 00:33:20.422 [2024-11-17 18:54:06.835806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c6bc0 is same with the state(6) to be set 00:33:20.422 [2024-11-17 18:54:06.835847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c6bc0 (9): Bad file descriptor 00:33:20.422 [2024-11-17 18:54:06.836296] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:33:20.422 [2024-11-17 18:54:06.836336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:20.422 [2024-11-17 18:54:06.836354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:20.422 [2024-11-17 18:54:06.836369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:20.422 [2024-11-17 18:54:06.836382] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:20.422 [2024-11-17 18:54:06.836393] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:20.422 [2024-11-17 18:54:06.836401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:20.422 [2024-11-17 18:54:06.836414] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:20.422 [2024-11-17 18:54:06.836423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:20.422 18:54:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.422 18:54:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:20.422 18:54:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:21.356 [2024-11-17 18:54:07.838914] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:21.356 [2024-11-17 18:54:07.838964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:21.356 [2024-11-17 18:54:07.838986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:21.356 [2024-11-17 18:54:07.838998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:21.356 [2024-11-17 18:54:07.839010] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:21.356 [2024-11-17 18:54:07.839046] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:21.356 [2024-11-17 18:54:07.839057] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:21.356 [2024-11-17 18:54:07.839064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:21.356 [2024-11-17 18:54:07.839111] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:21.356 [2024-11-17 18:54:07.839166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.356 [2024-11-17 18:54:07.839188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.356 [2024-11-17 18:54:07.839206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.356 [2024-11-17 18:54:07.839219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.356 [2024-11-17 18:54:07.839232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:21.356 [2024-11-17 18:54:07.839245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.356 [2024-11-17 18:54:07.839259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.356 [2024-11-17 18:54:07.839271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.356 [2024-11-17 18:54:07.839285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.356 [2024-11-17 18:54:07.839297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.356 [2024-11-17 18:54:07.839310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:21.356 [2024-11-17 18:54:07.839402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b62d0 (9): Bad file descriptor 00:33:21.356 [2024-11-17 18:54:07.840398] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:21.356 [2024-11-17 18:54:07.840424] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:21.356 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:21.356 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:21.356 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:21.356 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:21.356 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:21.356 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:21.356 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:21.356 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.356 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:21.356 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:21.356 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:21.614 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:21.614 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:21.614 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:21.614 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.614 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:21.614 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:21.614 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:21.614 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:21.614 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:33:21.614 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:21.614 18:54:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:22.548 18:54:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:22.548 18:54:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:22.548 18:54:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:22.548 18:54:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:22.548 18:54:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.548 18:54:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:22.548 18:54:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:22.548 18:54:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.548 18:54:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:22.548 18:54:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:23.481 [2024-11-17 18:54:09.857332] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:23.481 [2024-11-17 18:54:09.857358] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:23.481 [2024-11-17 18:54:09.857379] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:23.481 [2024-11-17 18:54:09.944654] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:23.481 18:54:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:23.481 18:54:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:23.481 18:54:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:23.481 18:54:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.481 18:54:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:23.481 18:54:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:23.481 18:54:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:23.481 18:54:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.481 18:54:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:23.481 18:54:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:23.739 [2024-11-17 18:54:10.167941] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:23.739 [2024-11-17 18:54:10.169067] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x17c8f20:1 started. 
00:33:23.739 [2024-11-17 18:54:10.170458] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:23.739 [2024-11-17 18:54:10.170500] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:23.739 [2024-11-17 18:54:10.170531] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:23.739 [2024-11-17 18:54:10.170554] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:23.739 [2024-11-17 18:54:10.170569] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:23.739 [2024-11-17 18:54:10.217384] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x17c8f20 was disconnected and freed. delete nvme_qpair. 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:24.673 18:54:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 866410 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 866410 ']' 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 866410 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 866410 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 866410' 00:33:24.673 killing process with pid 866410 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 866410 00:33:24.673 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 866410 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:24.931 18:54:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:24.931 rmmod nvme_tcp 00:33:24.931 rmmod nvme_fabrics 00:33:24.931 rmmod nvme_keyring 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 866387 ']' 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 866387 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 866387 ']' 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 866387 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 866387 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 866387' 00:33:24.931 killing process 
with pid 866387 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 866387 00:33:24.931 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 866387 00:33:25.191 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:25.191 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:25.191 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:25.191 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:25.191 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:25.191 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:25.191 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:25.191 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:25.191 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:25.191 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.191 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.191 18:54:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.096 18:54:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:27.096 00:33:27.096 real 0m17.651s 00:33:27.096 user 0m25.550s 00:33:27.096 sys 0m3.050s 00:33:27.096 18:54:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:33:27.096 18:54:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:27.096 ************************************ 00:33:27.096 END TEST nvmf_discovery_remove_ifc 00:33:27.096 ************************************ 00:33:27.096 18:54:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:27.096 18:54:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:27.096 18:54:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:27.096 18:54:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.355 ************************************ 00:33:27.355 START TEST nvmf_identify_kernel_target 00:33:27.355 ************************************ 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:27.355 * Looking for test storage... 
00:33:27.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:27.355 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:27.356 18:54:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:27.356 18:54:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:27.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.356 --rc genhtml_branch_coverage=1 00:33:27.356 --rc genhtml_function_coverage=1 00:33:27.356 --rc genhtml_legend=1 00:33:27.356 --rc geninfo_all_blocks=1 00:33:27.356 --rc geninfo_unexecuted_blocks=1 00:33:27.356 00:33:27.356 ' 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:27.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.356 --rc genhtml_branch_coverage=1 00:33:27.356 --rc genhtml_function_coverage=1 00:33:27.356 --rc genhtml_legend=1 00:33:27.356 --rc geninfo_all_blocks=1 00:33:27.356 --rc geninfo_unexecuted_blocks=1 00:33:27.356 00:33:27.356 ' 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:27.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.356 --rc genhtml_branch_coverage=1 00:33:27.356 --rc genhtml_function_coverage=1 00:33:27.356 --rc genhtml_legend=1 00:33:27.356 --rc geninfo_all_blocks=1 00:33:27.356 --rc geninfo_unexecuted_blocks=1 00:33:27.356 00:33:27.356 ' 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:27.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.356 --rc genhtml_branch_coverage=1 00:33:27.356 --rc genhtml_function_coverage=1 00:33:27.356 --rc genhtml_legend=1 00:33:27.356 --rc geninfo_all_blocks=1 00:33:27.356 --rc geninfo_unexecuted_blocks=1 00:33:27.356 00:33:27.356 ' 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:27.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:27.356 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.357 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:27.357 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:27.357 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:27.357 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.357 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.357 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.357 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:27.357 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:27.357 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:27.357 18:54:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:29.976 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:29.976 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:29.976 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:29.976 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:29.976 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:29.976 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:29.976 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:29.976 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:29.976 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:29.976 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:29.977 18:54:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:29.977 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:29.977 18:54:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:29.977 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.977 18:54:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:29.977 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:29.977 Found net devices under 0000:0a:00.1: cvl_0_1 
00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:29.977 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:29.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:29.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:33:29.978 00:33:29.978 --- 10.0.0.2 ping statistics --- 00:33:29.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.978 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:29.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:29.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:33:29.978 00:33:29.978 --- 10.0.0.1 ping statistics --- 00:33:29.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.978 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:29.978 
18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:29.978 18:54:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:30.916 Waiting for block devices as requested 00:33:30.916 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:31.175 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:31.175 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:31.434 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:31.434 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:31.434 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:31.434 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:31.695 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:31.695 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:31.695 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:31.953 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:31.953 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:31.953 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:31.953 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:32.212 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:33:32.212 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:32.212 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:32.471 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:32.471 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:32.471 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:32.471 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:32.471 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:32.471 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:32.471 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:32.471 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:32.471 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:32.471 No valid GPT data, bailing 00:33:32.471 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:32.472 18:54:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:32.472 00:33:32.472 Discovery Log Number of Records 2, Generation counter 2 00:33:32.472 =====Discovery Log Entry 0====== 00:33:32.472 trtype: tcp 00:33:32.472 adrfam: ipv4 00:33:32.472 subtype: current discovery subsystem 
00:33:32.472 treq: not specified, sq flow control disable supported 00:33:32.472 portid: 1 00:33:32.472 trsvcid: 4420 00:33:32.472 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:32.472 traddr: 10.0.0.1 00:33:32.472 eflags: none 00:33:32.472 sectype: none 00:33:32.472 =====Discovery Log Entry 1====== 00:33:32.472 trtype: tcp 00:33:32.472 adrfam: ipv4 00:33:32.472 subtype: nvme subsystem 00:33:32.472 treq: not specified, sq flow control disable supported 00:33:32.472 portid: 1 00:33:32.472 trsvcid: 4420 00:33:32.472 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:32.472 traddr: 10.0.0.1 00:33:32.472 eflags: none 00:33:32.472 sectype: none 00:33:32.472 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:32.472 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:32.732 ===================================================== 00:33:32.732 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:32.732 ===================================================== 00:33:32.732 Controller Capabilities/Features 00:33:32.732 ================================ 00:33:32.732 Vendor ID: 0000 00:33:32.732 Subsystem Vendor ID: 0000 00:33:32.732 Serial Number: 06730e60c295091c921b 00:33:32.732 Model Number: Linux 00:33:32.732 Firmware Version: 6.8.9-20 00:33:32.732 Recommended Arb Burst: 0 00:33:32.732 IEEE OUI Identifier: 00 00 00 00:33:32.732 Multi-path I/O 00:33:32.732 May have multiple subsystem ports: No 00:33:32.732 May have multiple controllers: No 00:33:32.732 Associated with SR-IOV VF: No 00:33:32.732 Max Data Transfer Size: Unlimited 00:33:32.732 Max Number of Namespaces: 0 00:33:32.732 Max Number of I/O Queues: 1024 00:33:32.732 NVMe Specification Version (VS): 1.3 00:33:32.732 NVMe Specification Version (Identify): 1.3 00:33:32.732 Maximum Queue Entries: 1024 
00:33:32.732 Contiguous Queues Required: No 00:33:32.732 Arbitration Mechanisms Supported 00:33:32.732 Weighted Round Robin: Not Supported 00:33:32.732 Vendor Specific: Not Supported 00:33:32.732 Reset Timeout: 7500 ms 00:33:32.732 Doorbell Stride: 4 bytes 00:33:32.732 NVM Subsystem Reset: Not Supported 00:33:32.732 Command Sets Supported 00:33:32.732 NVM Command Set: Supported 00:33:32.732 Boot Partition: Not Supported 00:33:32.732 Memory Page Size Minimum: 4096 bytes 00:33:32.732 Memory Page Size Maximum: 4096 bytes 00:33:32.732 Persistent Memory Region: Not Supported 00:33:32.732 Optional Asynchronous Events Supported 00:33:32.732 Namespace Attribute Notices: Not Supported 00:33:32.732 Firmware Activation Notices: Not Supported 00:33:32.732 ANA Change Notices: Not Supported 00:33:32.732 PLE Aggregate Log Change Notices: Not Supported 00:33:32.732 LBA Status Info Alert Notices: Not Supported 00:33:32.732 EGE Aggregate Log Change Notices: Not Supported 00:33:32.732 Normal NVM Subsystem Shutdown event: Not Supported 00:33:32.733 Zone Descriptor Change Notices: Not Supported 00:33:32.733 Discovery Log Change Notices: Supported 00:33:32.733 Controller Attributes 00:33:32.733 128-bit Host Identifier: Not Supported 00:33:32.733 Non-Operational Permissive Mode: Not Supported 00:33:32.733 NVM Sets: Not Supported 00:33:32.733 Read Recovery Levels: Not Supported 00:33:32.733 Endurance Groups: Not Supported 00:33:32.733 Predictable Latency Mode: Not Supported 00:33:32.733 Traffic Based Keep ALive: Not Supported 00:33:32.733 Namespace Granularity: Not Supported 00:33:32.733 SQ Associations: Not Supported 00:33:32.733 UUID List: Not Supported 00:33:32.733 Multi-Domain Subsystem: Not Supported 00:33:32.733 Fixed Capacity Management: Not Supported 00:33:32.733 Variable Capacity Management: Not Supported 00:33:32.733 Delete Endurance Group: Not Supported 00:33:32.733 Delete NVM Set: Not Supported 00:33:32.733 Extended LBA Formats Supported: Not Supported 00:33:32.733 Flexible 
Data Placement Supported: Not Supported 00:33:32.733 00:33:32.733 Controller Memory Buffer Support 00:33:32.733 ================================ 00:33:32.733 Supported: No 00:33:32.733 00:33:32.733 Persistent Memory Region Support 00:33:32.733 ================================ 00:33:32.733 Supported: No 00:33:32.733 00:33:32.733 Admin Command Set Attributes 00:33:32.733 ============================ 00:33:32.733 Security Send/Receive: Not Supported 00:33:32.733 Format NVM: Not Supported 00:33:32.733 Firmware Activate/Download: Not Supported 00:33:32.733 Namespace Management: Not Supported 00:33:32.733 Device Self-Test: Not Supported 00:33:32.733 Directives: Not Supported 00:33:32.733 NVMe-MI: Not Supported 00:33:32.733 Virtualization Management: Not Supported 00:33:32.733 Doorbell Buffer Config: Not Supported 00:33:32.733 Get LBA Status Capability: Not Supported 00:33:32.733 Command & Feature Lockdown Capability: Not Supported 00:33:32.733 Abort Command Limit: 1 00:33:32.733 Async Event Request Limit: 1 00:33:32.733 Number of Firmware Slots: N/A 00:33:32.733 Firmware Slot 1 Read-Only: N/A 00:33:32.733 Firmware Activation Without Reset: N/A 00:33:32.733 Multiple Update Detection Support: N/A 00:33:32.733 Firmware Update Granularity: No Information Provided 00:33:32.733 Per-Namespace SMART Log: No 00:33:32.733 Asymmetric Namespace Access Log Page: Not Supported 00:33:32.733 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:32.733 Command Effects Log Page: Not Supported 00:33:32.733 Get Log Page Extended Data: Supported 00:33:32.733 Telemetry Log Pages: Not Supported 00:33:32.733 Persistent Event Log Pages: Not Supported 00:33:32.733 Supported Log Pages Log Page: May Support 00:33:32.733 Commands Supported & Effects Log Page: Not Supported 00:33:32.733 Feature Identifiers & Effects Log Page:May Support 00:33:32.733 NVMe-MI Commands & Effects Log Page: May Support 00:33:32.733 Data Area 4 for Telemetry Log: Not Supported 00:33:32.733 Error Log Page Entries 
Supported: 1 00:33:32.733 Keep Alive: Not Supported 00:33:32.733 00:33:32.733 NVM Command Set Attributes 00:33:32.733 ========================== 00:33:32.733 Submission Queue Entry Size 00:33:32.733 Max: 1 00:33:32.733 Min: 1 00:33:32.733 Completion Queue Entry Size 00:33:32.733 Max: 1 00:33:32.733 Min: 1 00:33:32.733 Number of Namespaces: 0 00:33:32.733 Compare Command: Not Supported 00:33:32.733 Write Uncorrectable Command: Not Supported 00:33:32.733 Dataset Management Command: Not Supported 00:33:32.733 Write Zeroes Command: Not Supported 00:33:32.733 Set Features Save Field: Not Supported 00:33:32.733 Reservations: Not Supported 00:33:32.733 Timestamp: Not Supported 00:33:32.733 Copy: Not Supported 00:33:32.733 Volatile Write Cache: Not Present 00:33:32.733 Atomic Write Unit (Normal): 1 00:33:32.733 Atomic Write Unit (PFail): 1 00:33:32.733 Atomic Compare & Write Unit: 1 00:33:32.733 Fused Compare & Write: Not Supported 00:33:32.733 Scatter-Gather List 00:33:32.733 SGL Command Set: Supported 00:33:32.733 SGL Keyed: Not Supported 00:33:32.733 SGL Bit Bucket Descriptor: Not Supported 00:33:32.733 SGL Metadata Pointer: Not Supported 00:33:32.733 Oversized SGL: Not Supported 00:33:32.733 SGL Metadata Address: Not Supported 00:33:32.733 SGL Offset: Supported 00:33:32.733 Transport SGL Data Block: Not Supported 00:33:32.733 Replay Protected Memory Block: Not Supported 00:33:32.733 00:33:32.733 Firmware Slot Information 00:33:32.733 ========================= 00:33:32.733 Active slot: 0 00:33:32.733 00:33:32.733 00:33:32.733 Error Log 00:33:32.733 ========= 00:33:32.733 00:33:32.733 Active Namespaces 00:33:32.733 ================= 00:33:32.733 Discovery Log Page 00:33:32.733 ================== 00:33:32.733 Generation Counter: 2 00:33:32.733 Number of Records: 2 00:33:32.733 Record Format: 0 00:33:32.733 00:33:32.733 Discovery Log Entry 0 00:33:32.733 ---------------------- 00:33:32.733 Transport Type: 3 (TCP) 00:33:32.733 Address Family: 1 (IPv4) 00:33:32.733 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:33:32.733 Entry Flags: 00:33:32.733 Duplicate Returned Information: 0 00:33:32.733 Explicit Persistent Connection Support for Discovery: 0 00:33:32.733 Transport Requirements: 00:33:32.733 Secure Channel: Not Specified 00:33:32.733 Port ID: 1 (0x0001) 00:33:32.733 Controller ID: 65535 (0xffff) 00:33:32.733 Admin Max SQ Size: 32 00:33:32.733 Transport Service Identifier: 4420 00:33:32.733 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:32.733 Transport Address: 10.0.0.1 00:33:32.733 Discovery Log Entry 1 00:33:32.733 ---------------------- 00:33:32.733 Transport Type: 3 (TCP) 00:33:32.733 Address Family: 1 (IPv4) 00:33:32.733 Subsystem Type: 2 (NVM Subsystem) 00:33:32.733 Entry Flags: 00:33:32.733 Duplicate Returned Information: 0 00:33:32.733 Explicit Persistent Connection Support for Discovery: 0 00:33:32.733 Transport Requirements: 00:33:32.733 Secure Channel: Not Specified 00:33:32.733 Port ID: 1 (0x0001) 00:33:32.733 Controller ID: 65535 (0xffff) 00:33:32.733 Admin Max SQ Size: 32 00:33:32.733 Transport Service Identifier: 4420 00:33:32.733 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:32.733 Transport Address: 10.0.0.1 00:33:32.733 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:32.733 get_feature(0x01) failed 00:33:32.733 get_feature(0x02) failed 00:33:32.733 get_feature(0x04) failed 00:33:32.733 ===================================================== 00:33:32.733 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:32.733 ===================================================== 00:33:32.733 Controller Capabilities/Features 00:33:32.733 ================================ 00:33:32.733 Vendor ID: 0000 00:33:32.733 Subsystem Vendor ID: 
0000 00:33:32.733 Serial Number: 56e8c445034d9274f995 00:33:32.733 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:32.733 Firmware Version: 6.8.9-20 00:33:32.733 Recommended Arb Burst: 6 00:33:32.733 IEEE OUI Identifier: 00 00 00 00:33:32.733 Multi-path I/O 00:33:32.733 May have multiple subsystem ports: Yes 00:33:32.733 May have multiple controllers: Yes 00:33:32.733 Associated with SR-IOV VF: No 00:33:32.733 Max Data Transfer Size: Unlimited 00:33:32.733 Max Number of Namespaces: 1024 00:33:32.733 Max Number of I/O Queues: 128 00:33:32.733 NVMe Specification Version (VS): 1.3 00:33:32.733 NVMe Specification Version (Identify): 1.3 00:33:32.733 Maximum Queue Entries: 1024 00:33:32.733 Contiguous Queues Required: No 00:33:32.733 Arbitration Mechanisms Supported 00:33:32.733 Weighted Round Robin: Not Supported 00:33:32.733 Vendor Specific: Not Supported 00:33:32.733 Reset Timeout: 7500 ms 00:33:32.733 Doorbell Stride: 4 bytes 00:33:32.733 NVM Subsystem Reset: Not Supported 00:33:32.733 Command Sets Supported 00:33:32.733 NVM Command Set: Supported 00:33:32.733 Boot Partition: Not Supported 00:33:32.733 Memory Page Size Minimum: 4096 bytes 00:33:32.733 Memory Page Size Maximum: 4096 bytes 00:33:32.733 Persistent Memory Region: Not Supported 00:33:32.733 Optional Asynchronous Events Supported 00:33:32.733 Namespace Attribute Notices: Supported 00:33:32.733 Firmware Activation Notices: Not Supported 00:33:32.733 ANA Change Notices: Supported 00:33:32.733 PLE Aggregate Log Change Notices: Not Supported 00:33:32.733 LBA Status Info Alert Notices: Not Supported 00:33:32.733 EGE Aggregate Log Change Notices: Not Supported 00:33:32.734 Normal NVM Subsystem Shutdown event: Not Supported 00:33:32.734 Zone Descriptor Change Notices: Not Supported 00:33:32.734 Discovery Log Change Notices: Not Supported 00:33:32.734 Controller Attributes 00:33:32.734 128-bit Host Identifier: Supported 00:33:32.734 Non-Operational Permissive Mode: Not Supported 00:33:32.734 NVM Sets: Not 
Supported 00:33:32.734 Read Recovery Levels: Not Supported 00:33:32.734 Endurance Groups: Not Supported 00:33:32.734 Predictable Latency Mode: Not Supported 00:33:32.734 Traffic Based Keep ALive: Supported 00:33:32.734 Namespace Granularity: Not Supported 00:33:32.734 SQ Associations: Not Supported 00:33:32.734 UUID List: Not Supported 00:33:32.734 Multi-Domain Subsystem: Not Supported 00:33:32.734 Fixed Capacity Management: Not Supported 00:33:32.734 Variable Capacity Management: Not Supported 00:33:32.734 Delete Endurance Group: Not Supported 00:33:32.734 Delete NVM Set: Not Supported 00:33:32.734 Extended LBA Formats Supported: Not Supported 00:33:32.734 Flexible Data Placement Supported: Not Supported 00:33:32.734 00:33:32.734 Controller Memory Buffer Support 00:33:32.734 ================================ 00:33:32.734 Supported: No 00:33:32.734 00:33:32.734 Persistent Memory Region Support 00:33:32.734 ================================ 00:33:32.734 Supported: No 00:33:32.734 00:33:32.734 Admin Command Set Attributes 00:33:32.734 ============================ 00:33:32.734 Security Send/Receive: Not Supported 00:33:32.734 Format NVM: Not Supported 00:33:32.734 Firmware Activate/Download: Not Supported 00:33:32.734 Namespace Management: Not Supported 00:33:32.734 Device Self-Test: Not Supported 00:33:32.734 Directives: Not Supported 00:33:32.734 NVMe-MI: Not Supported 00:33:32.734 Virtualization Management: Not Supported 00:33:32.734 Doorbell Buffer Config: Not Supported 00:33:32.734 Get LBA Status Capability: Not Supported 00:33:32.734 Command & Feature Lockdown Capability: Not Supported 00:33:32.734 Abort Command Limit: 4 00:33:32.734 Async Event Request Limit: 4 00:33:32.734 Number of Firmware Slots: N/A 00:33:32.734 Firmware Slot 1 Read-Only: N/A 00:33:32.734 Firmware Activation Without Reset: N/A 00:33:32.734 Multiple Update Detection Support: N/A 00:33:32.734 Firmware Update Granularity: No Information Provided 00:33:32.734 Per-Namespace SMART Log: Yes 
00:33:32.734 Asymmetric Namespace Access Log Page: Supported 00:33:32.734 ANA Transition Time : 10 sec 00:33:32.734 00:33:32.734 Asymmetric Namespace Access Capabilities 00:33:32.734 ANA Optimized State : Supported 00:33:32.734 ANA Non-Optimized State : Supported 00:33:32.734 ANA Inaccessible State : Supported 00:33:32.734 ANA Persistent Loss State : Supported 00:33:32.734 ANA Change State : Supported 00:33:32.734 ANAGRPID is not changed : No 00:33:32.734 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:32.734 00:33:32.734 ANA Group Identifier Maximum : 128 00:33:32.734 Number of ANA Group Identifiers : 128 00:33:32.734 Max Number of Allowed Namespaces : 1024 00:33:32.734 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:32.734 Command Effects Log Page: Supported 00:33:32.734 Get Log Page Extended Data: Supported 00:33:32.734 Telemetry Log Pages: Not Supported 00:33:32.734 Persistent Event Log Pages: Not Supported 00:33:32.734 Supported Log Pages Log Page: May Support 00:33:32.734 Commands Supported & Effects Log Page: Not Supported 00:33:32.734 Feature Identifiers & Effects Log Page:May Support 00:33:32.734 NVMe-MI Commands & Effects Log Page: May Support 00:33:32.734 Data Area 4 for Telemetry Log: Not Supported 00:33:32.734 Error Log Page Entries Supported: 128 00:33:32.734 Keep Alive: Supported 00:33:32.734 Keep Alive Granularity: 1000 ms 00:33:32.734 00:33:32.734 NVM Command Set Attributes 00:33:32.734 ========================== 00:33:32.734 Submission Queue Entry Size 00:33:32.734 Max: 64 00:33:32.734 Min: 64 00:33:32.734 Completion Queue Entry Size 00:33:32.734 Max: 16 00:33:32.734 Min: 16 00:33:32.734 Number of Namespaces: 1024 00:33:32.734 Compare Command: Not Supported 00:33:32.734 Write Uncorrectable Command: Not Supported 00:33:32.734 Dataset Management Command: Supported 00:33:32.734 Write Zeroes Command: Supported 00:33:32.734 Set Features Save Field: Not Supported 00:33:32.734 Reservations: Not Supported 00:33:32.734 Timestamp: Not Supported 
00:33:32.734 Copy: Not Supported 00:33:32.734 Volatile Write Cache: Present 00:33:32.734 Atomic Write Unit (Normal): 1 00:33:32.734 Atomic Write Unit (PFail): 1 00:33:32.734 Atomic Compare & Write Unit: 1 00:33:32.734 Fused Compare & Write: Not Supported 00:33:32.734 Scatter-Gather List 00:33:32.734 SGL Command Set: Supported 00:33:32.734 SGL Keyed: Not Supported 00:33:32.734 SGL Bit Bucket Descriptor: Not Supported 00:33:32.734 SGL Metadata Pointer: Not Supported 00:33:32.734 Oversized SGL: Not Supported 00:33:32.734 SGL Metadata Address: Not Supported 00:33:32.734 SGL Offset: Supported 00:33:32.734 Transport SGL Data Block: Not Supported 00:33:32.734 Replay Protected Memory Block: Not Supported 00:33:32.734 00:33:32.734 Firmware Slot Information 00:33:32.734 ========================= 00:33:32.734 Active slot: 0 00:33:32.734 00:33:32.734 Asymmetric Namespace Access 00:33:32.734 =========================== 00:33:32.734 Change Count : 0 00:33:32.734 Number of ANA Group Descriptors : 1 00:33:32.734 ANA Group Descriptor : 0 00:33:32.734 ANA Group ID : 1 00:33:32.734 Number of NSID Values : 1 00:33:32.734 Change Count : 0 00:33:32.734 ANA State : 1 00:33:32.734 Namespace Identifier : 1 00:33:32.734 00:33:32.734 Commands Supported and Effects 00:33:32.734 ============================== 00:33:32.734 Admin Commands 00:33:32.734 -------------- 00:33:32.734 Get Log Page (02h): Supported 00:33:32.734 Identify (06h): Supported 00:33:32.734 Abort (08h): Supported 00:33:32.734 Set Features (09h): Supported 00:33:32.734 Get Features (0Ah): Supported 00:33:32.734 Asynchronous Event Request (0Ch): Supported 00:33:32.734 Keep Alive (18h): Supported 00:33:32.734 I/O Commands 00:33:32.734 ------------ 00:33:32.734 Flush (00h): Supported 00:33:32.734 Write (01h): Supported LBA-Change 00:33:32.734 Read (02h): Supported 00:33:32.734 Write Zeroes (08h): Supported LBA-Change 00:33:32.734 Dataset Management (09h): Supported 00:33:32.734 00:33:32.734 Error Log 00:33:32.734 ========= 
00:33:32.734 Entry: 0 00:33:32.734 Error Count: 0x3 00:33:32.734 Submission Queue Id: 0x0 00:33:32.734 Command Id: 0x5 00:33:32.734 Phase Bit: 0 00:33:32.734 Status Code: 0x2 00:33:32.734 Status Code Type: 0x0 00:33:32.734 Do Not Retry: 1 00:33:32.734 Error Location: 0x28 00:33:32.734 LBA: 0x0 00:33:32.734 Namespace: 0x0 00:33:32.734 Vendor Log Page: 0x0 00:33:32.734 ----------- 00:33:32.734 Entry: 1 00:33:32.734 Error Count: 0x2 00:33:32.734 Submission Queue Id: 0x0 00:33:32.734 Command Id: 0x5 00:33:32.734 Phase Bit: 0 00:33:32.734 Status Code: 0x2 00:33:32.734 Status Code Type: 0x0 00:33:32.734 Do Not Retry: 1 00:33:32.734 Error Location: 0x28 00:33:32.734 LBA: 0x0 00:33:32.734 Namespace: 0x0 00:33:32.734 Vendor Log Page: 0x0 00:33:32.734 ----------- 00:33:32.734 Entry: 2 00:33:32.734 Error Count: 0x1 00:33:32.734 Submission Queue Id: 0x0 00:33:32.734 Command Id: 0x4 00:33:32.734 Phase Bit: 0 00:33:32.734 Status Code: 0x2 00:33:32.734 Status Code Type: 0x0 00:33:32.734 Do Not Retry: 1 00:33:32.734 Error Location: 0x28 00:33:32.734 LBA: 0x0 00:33:32.734 Namespace: 0x0 00:33:32.734 Vendor Log Page: 0x0 00:33:32.734 00:33:32.734 Number of Queues 00:33:32.734 ================ 00:33:32.734 Number of I/O Submission Queues: 128 00:33:32.734 Number of I/O Completion Queues: 128 00:33:32.734 00:33:32.734 ZNS Specific Controller Data 00:33:32.734 ============================ 00:33:32.734 Zone Append Size Limit: 0 00:33:32.734 00:33:32.734 00:33:32.734 Active Namespaces 00:33:32.734 ================= 00:33:32.734 get_feature(0x05) failed 00:33:32.734 Namespace ID:1 00:33:32.734 Command Set Identifier: NVM (00h) 00:33:32.734 Deallocate: Supported 00:33:32.734 Deallocated/Unwritten Error: Not Supported 00:33:32.734 Deallocated Read Value: Unknown 00:33:32.734 Deallocate in Write Zeroes: Not Supported 00:33:32.734 Deallocated Guard Field: 0xFFFF 00:33:32.734 Flush: Supported 00:33:32.734 Reservation: Not Supported 00:33:32.734 Namespace Sharing Capabilities: Multiple 
Controllers 00:33:32.734 Size (in LBAs): 1953525168 (931GiB) 00:33:32.734 Capacity (in LBAs): 1953525168 (931GiB) 00:33:32.735 Utilization (in LBAs): 1953525168 (931GiB) 00:33:32.735 UUID: ea6eb23a-0d57-4092-bd9d-a1e146f4dd32 00:33:32.735 Thin Provisioning: Not Supported 00:33:32.735 Per-NS Atomic Units: Yes 00:33:32.735 Atomic Boundary Size (Normal): 0 00:33:32.735 Atomic Boundary Size (PFail): 0 00:33:32.735 Atomic Boundary Offset: 0 00:33:32.735 NGUID/EUI64 Never Reused: No 00:33:32.735 ANA group ID: 1 00:33:32.735 Namespace Write Protected: No 00:33:32.735 Number of LBA Formats: 1 00:33:32.735 Current LBA Format: LBA Format #00 00:33:32.735 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:32.735 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:32.735 rmmod nvme_tcp 00:33:32.735 rmmod nvme_fabrics 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:32.735 18:54:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.268 18:54:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:35.268 18:54:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:35.268 18:54:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:35.268 18:54:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:33:35.268 18:54:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:35.268 18:54:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:35.268 18:54:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:35.268 18:54:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:35.268 18:54:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:35.268 18:54:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:35.268 18:54:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:36.204 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:36.204 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:36.204 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:36.204 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:36.204 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:36.204 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:36.204 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:36.204 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:36.204 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:36.204 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:36.204 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:36.204 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:36.204 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:36.204 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:36.205 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:36.205 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:33:37.141 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:37.399 00:33:37.399 real 0m10.099s 00:33:37.399 user 0m2.337s 00:33:37.399 sys 0m3.732s 00:33:37.399 18:54:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:37.399 18:54:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:37.399 ************************************ 00:33:37.399 END TEST nvmf_identify_kernel_target 00:33:37.399 ************************************ 00:33:37.399 18:54:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:37.399 18:54:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:37.399 18:54:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:37.399 18:54:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.399 ************************************ 00:33:37.399 START TEST nvmf_auth_host 00:33:37.399 ************************************ 00:33:37.399 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:37.399 * Looking for test storage... 
00:33:37.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:37.399 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:37.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.400 --rc genhtml_branch_coverage=1 00:33:37.400 --rc genhtml_function_coverage=1 00:33:37.400 --rc genhtml_legend=1 00:33:37.400 --rc geninfo_all_blocks=1 00:33:37.400 --rc geninfo_unexecuted_blocks=1 00:33:37.400 00:33:37.400 ' 00:33:37.400 18:54:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:37.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.400 --rc genhtml_branch_coverage=1 00:33:37.400 --rc genhtml_function_coverage=1 00:33:37.400 --rc genhtml_legend=1 00:33:37.400 --rc geninfo_all_blocks=1 00:33:37.400 --rc geninfo_unexecuted_blocks=1 00:33:37.400 00:33:37.400 ' 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:37.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.400 --rc genhtml_branch_coverage=1 00:33:37.400 --rc genhtml_function_coverage=1 00:33:37.400 --rc genhtml_legend=1 00:33:37.400 --rc geninfo_all_blocks=1 00:33:37.400 --rc geninfo_unexecuted_blocks=1 00:33:37.400 00:33:37.400 ' 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:37.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.400 --rc genhtml_branch_coverage=1 00:33:37.400 --rc genhtml_function_coverage=1 00:33:37.400 --rc genhtml_legend=1 00:33:37.400 --rc geninfo_all_blocks=1 00:33:37.400 --rc geninfo_unexecuted_blocks=1 00:33:37.400 00:33:37.400 ' 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.400 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.659 18:54:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:37.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:37.659 18:54:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:37.659 18:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:39.560 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:39.561 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:39.561 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:39.561 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:39.561 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:39.561 18:54:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:39.561 18:54:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.561 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:39.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:33:39.820 00:33:39.820 --- 10.0.0.2 ping statistics --- 00:33:39.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.820 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:39.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:33:39.820 00:33:39.820 --- 10.0.0.1 ping statistics --- 00:33:39.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.820 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=874248 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:39.820 18:54:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 874248 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 874248 ']' 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:39.820 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=70149cc6e392e20df049f790031c6b93 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.s6e 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 70149cc6e392e20df049f790031c6b93 0 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 70149cc6e392e20df049f790031c6b93 0 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=70149cc6e392e20df049f790031c6b93 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.s6e 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.s6e 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.s6e 00:33:40.079 18:54:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6f229612e4b4687177400018c3032cb04acea12ec0f6c4bd4c7183410079a74d 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.7Xh 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6f229612e4b4687177400018c3032cb04acea12ec0f6c4bd4c7183410079a74d 3 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6f229612e4b4687177400018c3032cb04acea12ec0f6c4bd4c7183410079a74d 3 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6f229612e4b4687177400018c3032cb04acea12ec0f6c4bd4c7183410079a74d 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.7Xh 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.7Xh 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.7Xh 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=29872e6a5e4dbd767c694c213c6af25693541a25de8fe96f 00:33:40.079 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.2u5 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 29872e6a5e4dbd767c694c213c6af25693541a25de8fe96f 0 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 29872e6a5e4dbd767c694c213c6af25693541a25de8fe96f 0 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:40.337 18:54:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=29872e6a5e4dbd767c694c213c6af25693541a25de8fe96f 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.2u5 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.2u5 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.2u5 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=89c00504187b8cbb0b0d4f9db76635c038381ca30e3d8ec0 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yO9 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 89c00504187b8cbb0b0d4f9db76635c038381ca30e3d8ec0 2 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 89c00504187b8cbb0b0d4f9db76635c038381ca30e3d8ec0 2 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=89c00504187b8cbb0b0d4f9db76635c038381ca30e3d8ec0 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yO9 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yO9 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.yO9 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3de1b903d0885f4827bb3af7cb2df670 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:40.337 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.cwq 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3de1b903d0885f4827bb3af7cb2df670 1 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3de1b903d0885f4827bb3af7cb2df670 1 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3de1b903d0885f4827bb3af7cb2df670 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.cwq 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.cwq 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.cwq 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=cb79fa613f0de5dee8520b0a99b76f65 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.VQB 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cb79fa613f0de5dee8520b0a99b76f65 1 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cb79fa613f0de5dee8520b0a99b76f65 1 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cb79fa613f0de5dee8520b0a99b76f65 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.VQB 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.VQB 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.VQB 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:33:40.338 18:54:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=28aeb477a676cf1507fb6fd6f5ffa99b0575d273fec49faa 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.GJT 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 28aeb477a676cf1507fb6fd6f5ffa99b0575d273fec49faa 2 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 28aeb477a676cf1507fb6fd6f5ffa99b0575d273fec49faa 2 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=28aeb477a676cf1507fb6fd6f5ffa99b0575d273fec49faa 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:33:40.338 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.GJT 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.GJT 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.GJT 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3e9b77b6960f8f87ab6762816d9f32d2 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.dRK 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3e9b77b6960f8f87ab6762816d9f32d2 0 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3e9b77b6960f8f87ab6762816d9f32d2 0 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3e9b77b6960f8f87ab6762816d9f32d2 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.dRK 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.dRK 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.dRK 00:33:40.596 18:54:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2354b26261d7861642f016f4c59eee8ffd8245fa600b67f36660bc7075ebd8ba 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.h7l 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2354b26261d7861642f016f4c59eee8ffd8245fa600b67f36660bc7075ebd8ba 3 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2354b26261d7861642f016f4c59eee8ffd8245fa600b67f36660bc7075ebd8ba 3 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2354b26261d7861642f016f4c59eee8ffd8245fa600b67f36660bc7075ebd8ba 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:33:40.596 18:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:33:40.596 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.h7l 00:33:40.596 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.h7l 00:33:40.596 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.h7l 00:33:40.596 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:40.596 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 874248 00:33:40.596 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 874248 ']' 00:33:40.596 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.596 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:40.596 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
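The key-generation steps traced above (`gen_dhchap_key` → `xxd` on `/dev/urandom` → `format_dhchap_key` → an inline `python -` snippet) produce the `DHHC-1:xx:...:` secrets that appear later in this log. A minimal sketch of that formatting, assuming the standard NVMe DH-HMAC-CHAP secret representation (base64 of the raw secret with its CRC-32 appended little-endian, and the hash identifier as two hex digits); the internals of the script's actual `python -` helper are not shown in the trace, so this is an inference from its observed output, not the script itself:

```python
import base64
import struct
import zlib


def format_dhchap_key(hex_key: str, digest_id: int) -> str:
    """Format a raw hex secret as a DH-HMAC-CHAP secret string.

    The representation is DHHC-1:<hash id, 2 hex digits>:<base64 payload>:
    where the payload is the secret bytes followed by their CRC-32,
    packed little-endian. This mirrors the log's format_dhchap_key step;
    the exact helper internals are an assumption based on its output.
    """
    secret = bytes.fromhex(hex_key)
    crc = struct.pack("<I", zlib.crc32(secret))  # CRC-32 appended little-endian
    payload = base64.b64encode(secret + crc).decode("ascii")
    return f"DHHC-1:{digest_id:02x}:{payload}:"


# The sha256 (digest id 1) key generated at nvmf/common.sh@755 in this log:
print(format_dhchap_key("cb79fa613f0de5dee8520b0a99b76f65", 1))
```

The trailing CRC is what lets a consumer verify the secret survived copy/paste intact before using it for authentication.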
00:33:40.596 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:40.596 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.s6e 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.7Xh ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Xh 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.2u5 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.yO9 ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yO9 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.cwq 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.VQB ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VQB 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.GJT 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.dRK ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.dRK 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.h7l 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:40.855 18:54:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:40.855 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:40.856 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:40.856 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:33:40.856 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:40.856 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:40.856 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:40.856 18:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:42.230 Waiting for block devices as requested 00:33:42.230 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:42.230 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:42.488 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:42.488 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:42.746 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:42.746 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:42.746 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:42.746 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:43.004 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:43.004 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:43.004 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:43.004 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:43.262 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:43.262 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:43.262 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:43.262 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:43.520 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:43.778 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:43.778 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:43.778 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:43.778 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:43.778 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:33:43.778 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:43.778 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:43.778 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:43.778 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:44.037 No valid GPT data, bailing 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:44.037 00:33:44.037 Discovery Log Number of Records 2, Generation counter 2 00:33:44.037 =====Discovery Log Entry 0====== 00:33:44.037 trtype: tcp 00:33:44.037 adrfam: ipv4 00:33:44.037 subtype: current discovery subsystem 00:33:44.037 treq: not specified, sq flow control disable supported 00:33:44.037 portid: 1 00:33:44.037 trsvcid: 4420 00:33:44.037 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:44.037 traddr: 10.0.0.1 00:33:44.037 eflags: none 00:33:44.037 sectype: none 00:33:44.037 =====Discovery Log Entry 1====== 00:33:44.037 trtype: tcp 00:33:44.037 adrfam: ipv4 00:33:44.037 subtype: nvme subsystem 00:33:44.037 treq: not specified, sq flow control disable supported 00:33:44.037 portid: 1 00:33:44.037 trsvcid: 4420 00:33:44.037 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:44.037 traddr: 10.0.0.1 00:33:44.037 eflags: none 00:33:44.037 sectype: none 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:44.037 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]] 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.038 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.296 nvme0n1 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]] 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.296 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.553 nvme0n1
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==:
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==:
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==:
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]]
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==:
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.553 18:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.811 nvme0n1
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX:
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN:
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX:
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]]
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN:
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:44.811 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:44.812 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:44.812 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:44.812 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:44.812 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:44.812 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:44.812 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:44.812 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:44.812 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.812 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:44.812 nvme0n1
00:33:44.812 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:44.812 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:44.812 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:44.812 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:44.812 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==:
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB:
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==:
00:33:45.070 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]]
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB:
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.071 nvme0n1
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.071 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.330 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.330 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:45.330 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:33:45.330 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:45.330 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:45.330 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:45.330 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:45.330 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=:
00:33:45.330 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:45.330 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:45.330 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:45.330 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=:
00:33:45.330 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.331 nvme0n1
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9:
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=:
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9:
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]]
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=:
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.331 18:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.590 nvme0n1
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==:
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==:
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==:
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]]
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==:
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.590 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.849 nvme0n1
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX:
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN:
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX:
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]]
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN:
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
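Every repeated block in the trace above follows one pattern per (digest, dhgroup, keyid) combination: restrict the initiator's DH-HMAC-CHAP parameters, attach with the matching key pair, confirm the controller appeared (authentication succeeded), and detach. The sketch below condenses that flow; it is a hedged illustration, not the test script itself — `rpc` here is a dry-run stand-in that only echoes the command, where the real test invokes SPDK's RPC interface (the subcommands and flags are taken verbatim from the log):

```shell
# Dry-run stand-in: prints the would-be rpc.py invocation instead of
# talking to a live SPDK target. Swap in the real scripts/rpc.py to run it.
rpc() { echo "rpc.py $*"; }

digest=sha256
dhgroup=ffdhe3072
keyid=2

# 1. Allow only this digest/dhgroup pair on the initiator side.
rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Attach using the numbered host key; the controller key enables
#    bidirectional authentication (omitted in the log when ckey is empty).
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. nvme0 only shows up in get_controllers if the DH-CHAP exchange
#    succeeded; the test checks the name with jq, then detaches.
rpc bdev_nvme_get_controllers
rpc bdev_nvme_detach_controller nvme0
```

In the log this whole sequence repeats for every keyid under each dhgroup (ffdhe2048, ffdhe3072, ...), which is why the same RPC names recur with only the key index and group changing.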
00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.849 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.107 nvme0n1 00:33:46.107 18:54:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:33:46.107 18:54:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]] 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.107 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.368 nvme0n1 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.368 18:54:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.368 18:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.627 nvme0n1 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]] 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.627 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.628 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:46.628 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:46.628 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:46.628 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.628 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.628 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:46.628 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.628 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:33:46.628 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:46.628 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:46.628 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:46.628 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.628 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.886 nvme0n1 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]] 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:46.886 
18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.886 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.144 nvme0n1 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.402 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.403 18:54:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]] 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.403 18:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.661 nvme0n1 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.661 18:54:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:33:47.661 
18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:33:47.661 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]] 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:47.662 18:54:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.662 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.920 nvme0n1 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.920 18:54:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:47.920 
18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.920 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.178 nvme0n1 00:33:48.178 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.178 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:48.178 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:48.178 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.178 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.178 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]] 00:33:48.436 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:48.437 18:54:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.437 18:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.002 nvme0n1 00:33:49.002 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.002 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]] 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:49.003 18:54:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.003 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.569 nvme0n1 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]] 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.569 18:54:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.569 18:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.133 nvme0n1 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.133 18:54:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:50.133 18:54:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]] 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:50.133 18:54:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.133 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.698 nvme0n1 00:33:50.698 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.698 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.698 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.698 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.698 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.699 18:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.699 18:54:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:50.699 18:54:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.699 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.957 nvme0n1 00:33:50.957 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.957 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:50.957 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.957 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.957 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:50.957 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]] 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:51.215 18:54:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.215 18:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.148 nvme0n1 00:33:52.148 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.148 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:52.148 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:52.148 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.148 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.148 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.148 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.148 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:52.148 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.148 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.148 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:52.149 18:54:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]] 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:52.149 18:54:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:52.149 18:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.149 18:54:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.083 nvme0n1 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]] 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.083 18:54:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.083 18:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.017 nvme0n1 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.017 18:54:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:54.017 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:54.018 18:54:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]] 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:54.018 18:54:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.018 18:54:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.951 nvme0n1 00:33:54.951 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.951 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:54.951 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:54.952 18:54:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:54.952 18:54:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.952 18:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.885 nvme0n1 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.885 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]] 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:55.886 18:54:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:55.886 nvme0n1
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:55.886 18:54:42
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==:
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==:
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==:
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]]
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==:
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:55.886 18:54:42
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:55.886 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:55.886 18:54:42
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.145 nvme0n1
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- #
key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX:
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN:
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX:
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]]
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN:
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.145 18:54:42
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.145 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.403 nvme0n1
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==:
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB:
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host --
host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==:
00:33:56.403 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]]
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB:
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host --
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.404 18:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.662 nvme0n1
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host --
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=:
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=:
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:33:56.662 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:56.663
18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host --
nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.663 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.922 nvme0n1
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- #
nvmet_auth_set_key sha384 ffdhe3072 0
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9:
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=:
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9:
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]]
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=:
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- #
keyid=0
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
--dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:56.922 nvme0n1
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:56.922 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.180 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:57.180 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:57.180 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:57.180 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:57.180 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.180 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:57.180 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:57.180 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:57.181
18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==:
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==:
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==:
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]]
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==:
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@563 -- # xtrace_disable
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.181 nvme0n1
00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 ==
0 ]] 00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.181 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX:
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]]
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN:
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.439 nvme0n1
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.439 18:54:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:57.439 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:57.697 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:57.697 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:57.697 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:57.697 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.697 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:57.697 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:57.697 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:33:57.697 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:57.697 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:33:57.697 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:57.697 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:57.697 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==:
00:33:57.697 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB:
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==:
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]]
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB:
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.698 nvme0n1
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:57.698 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=:
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=:
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.956 nvme0n1
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:57.956 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:58.214 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9:
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=:
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9:
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]]
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=:
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:58.215 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:58.474 nvme0n1
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==:
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==:
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==:
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]]
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==:
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:58.474 18:54:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:58.734 nvme0n1
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX:
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN:
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX:
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]]
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN:
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:58.734 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:58.993 nvme0n1
00:33:58.993 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:58.993 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:58.993 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:58.993 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:58.993 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:58.993 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:59.254 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:59.254 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:59.254 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:59.254 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:59.254 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==:
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB:
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==:
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]]
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB:
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:33:59.255 18:54:45
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.255 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.533 nvme0n1 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.533 18:54:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:59.533 18:54:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:59.533 
18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.533 18:54:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.808 nvme0n1 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:59.808 18:54:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]] 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.808 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.381 nvme0n1 
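Every iteration in the trace above follows the same two-step RPC pattern: first restrict the host to one DH-HMAC-CHAP digest and dhgroup via `bdev_nvme_set_options`, then attach the controller with the key under test (plus the controller key `ckey<N>` when one exists; keyid 4 in this trace has `ckey=` empty). A minimal dry-run sketch of that loop, assuming the RPC names shown in the trace; the `rpc` function here is a stand-in that only echoes, so the sketch runs without a live SPDK target:

```shell
# Dry-run sketch of the per-key auth loop seen in the trace.
# "rpc" is a placeholder for SPDK's scripts/rpc.py; it just echoes.
rpc() { echo "rpc.py $*"; }

digest=sha384
for dhgroup in ffdhe4096 ffdhe6144; do
  for keyid in 0 1 2 3 4; do
    # Step 1: allow only the digest/dhgroup under test on the host
    rpc bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Step 2: connect using key<N>; add the controller key when present
    # (keyid 4 has no ckey in this trace, hence no --dhchap-ctrlr-key)
    ckey_opt=""
    [ "$keyid" -lt 4 ] && ckey_opt="--dhchap-ctrlr-key ckey${keyid}"
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" $ckey_opt
    # Verify the controller exists, then tear down before the next combo,
    # mirroring the get_controllers/detach_controller pair in the trace
    rpc bdev_nvme_get_controllers
    rpc bdev_nvme_detach_controller nvme0
  done
done
```

The dry-run form makes the control flow of the test visible without requiring a running target; in the real autotest the same calls go through `rpc.py` against the live bdev_nvme subsystem.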
00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:00.381 18:54:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]] 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.381 
18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.381 18:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.947 nvme0n1 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.947 18:54:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]] 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.947 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.512 nvme0n1 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]] 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:01.512 18:54:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:02.078 nvme0n1
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=:
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=:
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.078 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:02.644 nvme0n1
00:34:02.644 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.644 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:02.644 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:02.644 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.644 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:02.644 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.644 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:02.644 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:02.644 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.644 18:54:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9:
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=:
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9:
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]]
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=:
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.644 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:03.578 nvme0n1
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==:
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==:
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:03.578 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==:
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]]
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==:
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:03.579 18:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:04.512 nvme0n1
00:34:04.512 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:04.512 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:04.512 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX:
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN:
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX:
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]]
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN:
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:04.513 18:54:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:05.449 nvme0n1
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==:
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB:
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==:
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]]
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB:
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:05.449 18:54:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:06.384 nvme0n1
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=:
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=:
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:06.384 18:54:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:06.949 nvme0n1
00:34:06.949 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:06.949 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:06.949 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:06.949 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:06.949 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:06.949 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:06.950 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:06.950 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:06.950 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:06.950 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9:
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=:
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9:
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]]
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=:
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:07.208 nvme0n1
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:07.208 18:54:53
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]] 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.208 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.466 nvme0n1 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]] 00:34:07.466 18:54:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.466 18:54:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.724 nvme0n1 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.725 18:54:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]] 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.725 18:54:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.725 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.982 nvme0n1 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.983 18:54:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.983 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.241 nvme0n1 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]] 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.242 18:54:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.242 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.500 nvme0n1 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.500 18:54:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.500 
18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]] 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.500 18:54:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.759 nvme0n1 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]] 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 
00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.759 18:54:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.759 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.018 nvme0n1 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.018 18:54:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]] 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.018 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.276 nvme0n1 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.276 18:54:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:09.276 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.277 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.536 nvme0n1 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.536 
18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]] 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.536 18:54:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.536 18:54:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.795 nvme0n1 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.795 18:54:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:09.795 18:54:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]] 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.795 18:54:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.795 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.053 nvme0n1 00:34:10.053 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.053 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.053 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.053 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.053 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.053 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.053 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.053 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.053 18:54:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.053 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]] 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:10.311 18:54:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.311 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.570 nvme0n1 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]] 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:10.570 18:54:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.570 18:54:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.827 nvme0n1 00:34:10.827 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.827 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.827 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.827 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.827 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.827 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.827 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.827 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.827 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.827 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.827 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.828 
18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.828 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.086 nvme0n1 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.086 18:54:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]] 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.086 18:54:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.652 nvme0n1 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:11.652 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]] 00:34:11.653 18:54:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.653 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.218 nvme0n1 00:34:12.218 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.218 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.218 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.218 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.218 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.218 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]] 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:12.219 
18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.219 18:54:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.784 nvme0n1 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.784 18:54:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]] 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.784 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:13.349 nvme0n1 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:13.349 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.350 
18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.350 18:54:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.918 nvme0n1 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzAxNDljYzZlMzkyZTIwZGYwNDlmNzkwMDMxYzZiOTMAUPN9: 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: ]] 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmYyMjk2MTJlNGI0Njg3MTc3NDAwMDE4YzMwMzJjYjA0YWNlYTEyZWMwZjZjNGJkNGM3MTgzNDEwMDc5YTc0ZIHg+Jk=: 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.918 18:55:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.918 18:55:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.865 nvme0n1 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.865 18:55:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]] 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.865 18:55:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.865 18:55:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.798 nvme0n1 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.798 18:55:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:15.798 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]] 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:15.799 18:55:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.799 18:55:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.732 nvme0n1 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.732 18:55:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjhhZWI0NzdhNjc2Y2YxNTA3ZmI2ZmQ2ZjVmZmE5OWIwNTc1ZDI3M2ZlYzQ5ZmFhvpJErA==: 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: ]] 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2U5Yjc3YjY5NjBmOGY4N2FiNjc2MjgxNmQ5ZjMyZDJSuyEB: 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.732 18:55:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:17.667 nvme0n1 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjM1NGIyNjI2MWQ3ODYxNjQyZjAxNmY0YzU5ZWVlOGZmZDgyNDVmYTYwMGI2N2YzNjY2MGJjNzA3NWViZDhiYd0f/hs=: 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.667 
18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.667 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.602 nvme0n1 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]] 00:34:18.602 
18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.602 18:55:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.602 request: 00:34:18.602 { 00:34:18.602 "name": "nvme0", 00:34:18.602 "trtype": "tcp", 00:34:18.602 "traddr": "10.0.0.1", 00:34:18.602 "adrfam": "ipv4", 00:34:18.602 "trsvcid": "4420", 00:34:18.602 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:18.602 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:18.602 "prchk_reftag": false, 00:34:18.602 "prchk_guard": false, 00:34:18.602 "hdgst": false, 00:34:18.602 "ddgst": false, 00:34:18.602 "allow_unrecognized_csi": false, 00:34:18.602 "method": "bdev_nvme_attach_controller", 00:34:18.602 "req_id": 1 00:34:18.602 } 00:34:18.602 Got JSON-RPC error response 00:34:18.602 response: 00:34:18.602 { 00:34:18.602 "code": -5, 00:34:18.602 "message": "Input/output 
error" 00:34:18.602 } 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:18.602 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.603 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:18.603 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.603 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:18.603 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.603 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.861 request: 00:34:18.861 { 00:34:18.861 "name": "nvme0", 00:34:18.861 "trtype": "tcp", 00:34:18.861 "traddr": "10.0.0.1", 
00:34:18.861 "adrfam": "ipv4", 00:34:18.861 "trsvcid": "4420", 00:34:18.861 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:18.861 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:18.861 "prchk_reftag": false, 00:34:18.861 "prchk_guard": false, 00:34:18.861 "hdgst": false, 00:34:18.861 "ddgst": false, 00:34:18.861 "dhchap_key": "key2", 00:34:18.861 "allow_unrecognized_csi": false, 00:34:18.861 "method": "bdev_nvme_attach_controller", 00:34:18.861 "req_id": 1 00:34:18.861 } 00:34:18.861 Got JSON-RPC error response 00:34:18.861 response: 00:34:18.861 { 00:34:18.861 "code": -5, 00:34:18.861 "message": "Input/output error" 00:34:18.862 } 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.862 18:55:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:18.862 18:55:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.862 request: 00:34:18.862 { 00:34:18.862 "name": "nvme0", 00:34:18.862 "trtype": "tcp", 00:34:18.862 "traddr": "10.0.0.1", 00:34:18.862 "adrfam": "ipv4", 00:34:18.862 "trsvcid": "4420", 00:34:18.862 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:18.862 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:18.862 "prchk_reftag": false, 00:34:18.862 "prchk_guard": false, 00:34:18.862 "hdgst": false, 00:34:18.862 "ddgst": false, 00:34:18.862 "dhchap_key": "key1", 00:34:18.862 "dhchap_ctrlr_key": "ckey2", 00:34:18.862 "allow_unrecognized_csi": false, 00:34:18.862 "method": "bdev_nvme_attach_controller", 00:34:18.862 "req_id": 1 00:34:18.862 } 00:34:18.862 Got JSON-RPC error response 00:34:18.862 response: 00:34:18.862 { 00:34:18.862 "code": -5, 00:34:18.862 "message": "Input/output error" 00:34:18.862 } 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.862 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.120 nvme0n1 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.120 18:55:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]] 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.120 18:55:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.120 request: 00:34:19.120 { 00:34:19.120 "name": "nvme0", 00:34:19.120 "dhchap_key": "key1", 00:34:19.120 "dhchap_ctrlr_key": "ckey2", 00:34:19.120 "method": "bdev_nvme_set_keys", 00:34:19.120 "req_id": 1 00:34:19.120 } 00:34:19.120 Got JSON-RPC error response 00:34:19.120 response: 00:34:19.120 { 00:34:19.120 "code": -13, 00:34:19.120 "message": "Permission denied" 00:34:19.120 } 00:34:19.120 
18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:19.120 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.377 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.377 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:19.377 18:55:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:20.311 18:55:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk4NzJlNmE1ZTRkYmQ3NjdjNjk0YzIxM2M2YWYyNTY5MzU0MWEyNWRlOGZlOTZmbSjqDA==: 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: ]] 00:34:20.311 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODljMDA1MDQxODdiOGNiYjBiMGQ0ZjlkYjc2NjM1YzAzODM4MWNhMzBlM2Q4ZWMwYMEWfQ==: 00:34:20.312 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:20.312 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.312 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.312 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.312 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.312 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.312 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.312 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.312 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.312 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.312 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.312 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:20.312 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.312 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.570 nvme0n1 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2RlMWI5MDNkMDg4NWY0ODI3YmIzYWY3Y2IyZGY2NzCK1RTX: 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: ]] 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3OWZhNjEzZjBkZTVkZWU4NTIwYjBhOTliNzZmNjUptNTN: 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:20.570 request: 00:34:20.570 { 00:34:20.570 "name": "nvme0", 00:34:20.570 "dhchap_key": "key2", 00:34:20.570 "dhchap_ctrlr_key": "ckey1", 00:34:20.570 "method": "bdev_nvme_set_keys", 00:34:20.570 "req_id": 1 00:34:20.570 } 00:34:20.570 Got JSON-RPC error response 00:34:20.570 response: 00:34:20.570 { 00:34:20.570 "code": -13, 00:34:20.570 "message": "Permission denied" 00:34:20.570 } 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.570 18:55:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.570 18:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.570 18:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:20.570 18:55:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:21.504 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:21.504 rmmod nvme_tcp 00:34:21.763 rmmod nvme_fabrics 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 874248 ']' 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 874248 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 874248 ']' 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 874248 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 
00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 874248 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 874248' 00:34:21.763 killing process with pid 874248 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 874248 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 874248 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:21.763 18:55:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.302 18:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:24.302 18:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:24.302 18:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:24.302 18:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:24.302 18:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:24.302 18:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:24.302 18:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:24.302 18:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:24.302 18:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:24.302 18:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:24.302 18:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:24.302 18:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:24.302 18:55:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:25.239 0000:00:04.7 (8086 0e27): ioatdma -> 
vfio-pci 00:34:25.239 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:25.239 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:25.239 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:25.239 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:25.239 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:25.239 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:25.239 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:25.239 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:25.239 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:25.239 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:25.239 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:25.239 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:25.239 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:25.239 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:25.239 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:26.177 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:26.436 18:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.s6e /tmp/spdk.key-null.2u5 /tmp/spdk.key-sha256.cwq /tmp/spdk.key-sha384.GJT /tmp/spdk.key-sha512.h7l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:26.436 18:55:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:27.370 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:27.370 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:27.370 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:27.370 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:27.370 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:27.370 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:27.370 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:27.630 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:27.630 0000:00:04.0 (8086 
0e20): Already using the vfio-pci driver 00:34:27.630 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:27.630 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:27.630 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:27.630 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:27.630 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:27.630 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:27.630 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:27.630 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:27.630 00:34:27.630 real 0m50.315s 00:34:27.630 user 0m48.047s 00:34:27.630 sys 0m6.295s 00:34:27.630 18:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:27.630 18:55:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.630 ************************************ 00:34:27.630 END TEST nvmf_auth_host 00:34:27.630 ************************************ 00:34:27.630 18:55:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:27.630 18:55:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:27.630 18:55:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:27.630 18:55:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:27.630 18:55:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.630 ************************************ 00:34:27.630 START TEST nvmf_digest 00:34:27.630 ************************************ 00:34:27.630 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:27.889 * Looking for test storage... 
00:34:27.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:27.889 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:27.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.890 --rc genhtml_branch_coverage=1 00:34:27.890 --rc genhtml_function_coverage=1 00:34:27.890 --rc genhtml_legend=1 00:34:27.890 --rc geninfo_all_blocks=1 00:34:27.890 --rc geninfo_unexecuted_blocks=1 00:34:27.890 00:34:27.890 ' 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:27.890 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:34:27.890 --rc genhtml_branch_coverage=1 00:34:27.890 --rc genhtml_function_coverage=1 00:34:27.890 --rc genhtml_legend=1 00:34:27.890 --rc geninfo_all_blocks=1 00:34:27.890 --rc geninfo_unexecuted_blocks=1 00:34:27.890 00:34:27.890 ' 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:27.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.890 --rc genhtml_branch_coverage=1 00:34:27.890 --rc genhtml_function_coverage=1 00:34:27.890 --rc genhtml_legend=1 00:34:27.890 --rc geninfo_all_blocks=1 00:34:27.890 --rc geninfo_unexecuted_blocks=1 00:34:27.890 00:34:27.890 ' 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:27.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.890 --rc genhtml_branch_coverage=1 00:34:27.890 --rc genhtml_function_coverage=1 00:34:27.890 --rc genhtml_legend=1 00:34:27.890 --rc geninfo_all_blocks=1 00:34:27.890 --rc geninfo_unexecuted_blocks=1 00:34:27.890 00:34:27.890 ' 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # 
export PATH 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:27.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- 
# bperfsock=/var/tmp/bperf.sock 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:27.890 18:55:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:30.420 
18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:30.420 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:30.421 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:30.421 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:30.421 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:30.421 
18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:30.421 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:30.421 18:55:16 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:30.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:30.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:34:30.421 00:34:30.421 --- 10.0.0.2 ping statistics --- 00:34:30.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.421 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:30.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:30.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:34:30.421 00:34:30.421 --- 10.0.0.1 ping statistics --- 00:34:30.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.421 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:30.421 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:30.422 ************************************ 00:34:30.422 START TEST nvmf_digest_clean 00:34:30.422 ************************************ 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@10 -- # set +x 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=883723 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 883723 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 883723 ']' 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:30.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:30.422 18:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:30.422 [2024-11-17 18:55:16.829988] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:34:30.422 [2024-11-17 18:55:16.830079] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:30.422 [2024-11-17 18:55:16.902034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.422 [2024-11-17 18:55:16.947123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:30.422 [2024-11-17 18:55:16.947173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:30.422 [2024-11-17 18:55:16.947186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:30.422 [2024-11-17 18:55:16.947211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:30.422 [2024-11-17 18:55:16.947221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:30.422 [2024-11-17 18:55:16.947787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:30.681 null0 00:34:30.681 [2024-11-17 18:55:17.177315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.681 [2024-11-17 18:55:17.201512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=883742 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 883742 /var/tmp/bperf.sock 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 883742 ']' 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:30.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:30.681 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:30.681 [2024-11-17 18:55:17.252771] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:34:30.681 [2024-11-17 18:55:17.252849] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883742 ] 00:34:30.940 [2024-11-17 18:55:17.326202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.940 [2024-11-17 18:55:17.375002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.940 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:30.940 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:30.940 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:30.940 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:30.940 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:31.560 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:31.560 18:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:31.817 nvme0n1 00:34:31.817 18:55:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:31.817 18:55:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:32.074 Running I/O for 2 seconds... 00:34:33.939 18722.00 IOPS, 73.13 MiB/s [2024-11-17T17:55:20.516Z] 18945.50 IOPS, 74.01 MiB/s 00:34:33.940 Latency(us) 00:34:33.940 [2024-11-17T17:55:20.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:33.940 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:33.940 nvme0n1 : 2.04 18593.96 72.63 0.00 0.00 6741.94 3422.44 45632.47 00:34:33.940 [2024-11-17T17:55:20.516Z] =================================================================================================================== 00:34:33.940 [2024-11-17T17:55:20.516Z] Total : 18593.96 72.63 0.00 0.00 6741.94 3422.44 45632.47 00:34:33.940 { 00:34:33.940 "results": [ 00:34:33.940 { 00:34:33.940 "job": "nvme0n1", 00:34:33.940 "core_mask": "0x2", 00:34:33.940 "workload": "randread", 00:34:33.940 "status": "finished", 00:34:33.940 "queue_depth": 128, 00:34:33.940 "io_size": 4096, 00:34:33.940 "runtime": 2.044696, 00:34:33.940 "iops": 18593.962134224355, 00:34:33.940 "mibps": 72.63266458681389, 00:34:33.940 "io_failed": 0, 00:34:33.940 "io_timeout": 0, 00:34:33.940 "avg_latency_us": 6741.939076777401, 00:34:33.940 "min_latency_us": 3422.4355555555558, 00:34:33.940 "max_latency_us": 45632.474074074074 00:34:33.940 } 00:34:33.940 ], 00:34:33.940 "core_count": 1 00:34:33.940 } 00:34:33.940 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:34.198 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:34:34.198 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:34.198 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:34.198 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:34.198 | select(.opcode=="crc32c") 00:34:34.198 | "\(.module_name) \(.executed)"' 00:34:34.456 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:34.456 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:34.456 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:34.456 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:34.456 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 883742 00:34:34.456 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 883742 ']' 00:34:34.456 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 883742 00:34:34.456 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:34.456 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:34.456 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 883742 00:34:34.456 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:34.456 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:34.456 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 883742' 00:34:34.456 killing process with pid 883742 00:34:34.456 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 883742 00:34:34.456 Received shutdown signal, test time was about 2.000000 seconds 00:34:34.456 00:34:34.456 Latency(us) 00:34:34.456 [2024-11-17T17:55:21.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:34.456 [2024-11-17T17:55:21.032Z] =================================================================================================================== 00:34:34.457 [2024-11-17T17:55:21.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:34.457 18:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 883742 00:34:34.457 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:34.457 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:34.457 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:34.457 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:34.457 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:34.457 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:34.457 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:34.457 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=884272 00:34:34.457 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:34.457 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 884272 /var/tmp/bperf.sock 00:34:34.457 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 884272 ']' 00:34:34.457 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:34.457 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.714 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:34.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:34.715 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.715 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:34.715 [2024-11-17 18:55:21.078294] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:34:34.715 [2024-11-17 18:55:21.078389] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884272 ] 00:34:34.715 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:34.715 Zero copy mechanism will not be used. 
00:34:34.715 [2024-11-17 18:55:21.146254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.715 [2024-11-17 18:55:21.195165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.972 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:34.972 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:34.972 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:34.972 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:34.972 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:35.231 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:35.231 18:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:35.489 nvme0n1 00:34:35.489 18:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:35.489 18:55:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:35.746 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:35.747 Zero copy mechanism will not be used. 00:34:35.747 Running I/O for 2 seconds... 
00:34:37.612 6483.00 IOPS, 810.38 MiB/s [2024-11-17T17:55:24.188Z] 6207.50 IOPS, 775.94 MiB/s 00:34:37.612 Latency(us) 00:34:37.612 [2024-11-17T17:55:24.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:37.612 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:37.612 nvme0n1 : 2.00 6209.37 776.17 0.00 0.00 2571.95 719.08 4708.88 00:34:37.612 [2024-11-17T17:55:24.188Z] =================================================================================================================== 00:34:37.612 [2024-11-17T17:55:24.188Z] Total : 6209.37 776.17 0.00 0.00 2571.95 719.08 4708.88 00:34:37.612 { 00:34:37.612 "results": [ 00:34:37.612 { 00:34:37.612 "job": "nvme0n1", 00:34:37.612 "core_mask": "0x2", 00:34:37.612 "workload": "randread", 00:34:37.612 "status": "finished", 00:34:37.612 "queue_depth": 16, 00:34:37.612 "io_size": 131072, 00:34:37.612 "runtime": 2.004068, 00:34:37.612 "iops": 6209.370141132936, 00:34:37.612 "mibps": 776.171267641617, 00:34:37.612 "io_failed": 0, 00:34:37.612 "io_timeout": 0, 00:34:37.612 "avg_latency_us": 2571.9491658035404, 00:34:37.612 "min_latency_us": 719.0755555555555, 00:34:37.612 "max_latency_us": 4708.882962962963 00:34:37.612 } 00:34:37.612 ], 00:34:37.612 "core_count": 1 00:34:37.612 } 00:34:37.612 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:37.612 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:37.612 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:37.612 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:37.612 | select(.opcode=="crc32c") 00:34:37.613 | "\(.module_name) \(.executed)"' 00:34:37.613 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:37.870 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:37.870 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:37.870 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:37.870 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:37.870 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 884272 00:34:37.870 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 884272 ']' 00:34:37.870 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 884272 00:34:37.870 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:37.870 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:37.870 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 884272 00:34:38.128 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:38.128 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:38.128 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 884272' 00:34:38.128 killing process with pid 884272 00:34:38.128 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 884272 00:34:38.128 Received shutdown signal, test time was about 2.000000 seconds 00:34:38.128 
00:34:38.128 Latency(us) 00:34:38.128 [2024-11-17T17:55:24.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:38.129 [2024-11-17T17:55:24.705Z] =================================================================================================================== 00:34:38.129 [2024-11-17T17:55:24.705Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 884272 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=884684 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 884684 /var/tmp/bperf.sock 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 884684 ']' 00:34:38.129 18:55:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:38.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.129 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:38.129 [2024-11-17 18:55:24.691437] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:34:38.129 [2024-11-17 18:55:24.691537] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884684 ] 00:34:38.388 [2024-11-17 18:55:24.759063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.388 [2024-11-17 18:55:24.801561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.388 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:38.388 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:38.388 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:38.388 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:38.388 18:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:38.953 18:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:38.953 18:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:39.214 nvme0n1 00:34:39.214 18:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:39.214 18:55:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:39.214 Running I/O for 2 seconds... 
00:34:41.518 19609.00 IOPS, 76.60 MiB/s [2024-11-17T17:55:28.094Z] 19140.50 IOPS, 74.77 MiB/s 00:34:41.518 Latency(us) 00:34:41.518 [2024-11-17T17:55:28.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.518 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:41.518 nvme0n1 : 2.01 19140.62 74.77 0.00 0.00 6672.01 2524.35 9272.13 00:34:41.518 [2024-11-17T17:55:28.094Z] =================================================================================================================== 00:34:41.518 [2024-11-17T17:55:28.094Z] Total : 19140.62 74.77 0.00 0.00 6672.01 2524.35 9272.13 00:34:41.518 { 00:34:41.518 "results": [ 00:34:41.518 { 00:34:41.518 "job": "nvme0n1", 00:34:41.518 "core_mask": "0x2", 00:34:41.518 "workload": "randwrite", 00:34:41.518 "status": "finished", 00:34:41.518 "queue_depth": 128, 00:34:41.518 "io_size": 4096, 00:34:41.518 "runtime": 2.008347, 00:34:41.518 "iops": 19140.6166364677, 00:34:41.518 "mibps": 74.76803373620196, 00:34:41.518 "io_failed": 0, 00:34:41.518 "io_timeout": 0, 00:34:41.518 "avg_latency_us": 6672.00947445195, 00:34:41.518 "min_latency_us": 2524.34962962963, 00:34:41.518 "max_latency_us": 9272.13037037037 00:34:41.518 } 00:34:41.518 ], 00:34:41.518 "core_count": 1 00:34:41.518 } 00:34:41.518 18:55:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:41.518 18:55:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:41.518 18:55:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:41.518 18:55:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:41.518 | select(.opcode=="crc32c") 00:34:41.518 | "\(.module_name) \(.executed)"' 00:34:41.518 18:55:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:41.518 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:41.518 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:41.518 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:41.518 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:41.518 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 884684 00:34:41.518 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 884684 ']' 00:34:41.518 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 884684 00:34:41.518 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:41.518 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.518 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 884684 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 884684' 00:34:41.777 killing process with pid 884684 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 884684 00:34:41.777 Received shutdown signal, test time was about 2.000000 seconds 00:34:41.777 
00:34:41.777 Latency(us) 00:34:41.777 [2024-11-17T17:55:28.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.777 [2024-11-17T17:55:28.353Z] =================================================================================================================== 00:34:41.777 [2024-11-17T17:55:28.353Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 884684 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=885087 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 885087 /var/tmp/bperf.sock 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 885087 ']' 00:34:41.777 18:55:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:41.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:41.777 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:42.036 [2024-11-17 18:55:28.359700] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:34:42.036 [2024-11-17 18:55:28.359799] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885087 ] 00:34:42.036 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:42.036 Zero copy mechanism will not be used. 
00:34:42.036 [2024-11-17 18:55:28.427401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.036 [2024-11-17 18:55:28.473483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:42.036 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:42.036 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:42.036 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:42.036 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:42.036 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:42.602 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:42.602 18:55:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:42.859 nvme0n1 00:34:42.859 18:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:42.859 18:55:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:42.859 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:42.859 Zero copy mechanism will not be used. 00:34:42.859 Running I/O for 2 seconds... 
00:34:45.161 6036.00 IOPS, 754.50 MiB/s [2024-11-17T17:55:31.737Z] 6213.00 IOPS, 776.62 MiB/s 00:34:45.161 Latency(us) 00:34:45.161 [2024-11-17T17:55:31.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.161 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:45.161 nvme0n1 : 2.00 6211.46 776.43 0.00 0.00 2569.20 1614.13 10097.40 00:34:45.161 [2024-11-17T17:55:31.737Z] =================================================================================================================== 00:34:45.161 [2024-11-17T17:55:31.737Z] Total : 6211.46 776.43 0.00 0.00 2569.20 1614.13 10097.40 00:34:45.161 { 00:34:45.161 "results": [ 00:34:45.161 { 00:34:45.161 "job": "nvme0n1", 00:34:45.161 "core_mask": "0x2", 00:34:45.161 "workload": "randwrite", 00:34:45.161 "status": "finished", 00:34:45.161 "queue_depth": 16, 00:34:45.161 "io_size": 131072, 00:34:45.161 "runtime": 2.003717, 00:34:45.161 "iops": 6211.45600900726, 00:34:45.161 "mibps": 776.4320011259075, 00:34:45.161 "io_failed": 0, 00:34:45.161 "io_timeout": 0, 00:34:45.161 "avg_latency_us": 2569.199611953268, 00:34:45.161 "min_latency_us": 1614.1274074074074, 00:34:45.161 "max_latency_us": 10097.39851851852 00:34:45.161 } 00:34:45.161 ], 00:34:45.161 "core_count": 1 00:34:45.161 } 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:45.161 | select(.opcode=="crc32c") 00:34:45.161 | "\(.module_name) \(.executed)"' 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 885087 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 885087 ']' 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 885087 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:45.161 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 885087 00:34:45.419 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:45.419 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:45.419 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 885087' 00:34:45.419 killing process with pid 885087 00:34:45.419 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 885087 00:34:45.419 Received shutdown signal, test time was about 2.000000 seconds 00:34:45.419 
00:34:45.419 Latency(us) 00:34:45.419 [2024-11-17T17:55:31.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.419 [2024-11-17T17:55:31.995Z] =================================================================================================================== 00:34:45.419 [2024-11-17T17:55:31.995Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:45.419 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 885087 00:34:45.419 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 883723 00:34:45.419 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 883723 ']' 00:34:45.419 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 883723 00:34:45.420 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:34:45.420 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:45.420 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 883723 00:34:45.678 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:45.678 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:45.678 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 883723' 00:34:45.678 killing process with pid 883723 00:34:45.678 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 883723 00:34:45.678 18:55:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 883723 00:34:45.678 00:34:45.678 real 0m15.403s 
00:34:45.678 user 0m31.051s 00:34:45.678 sys 0m4.260s 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:45.678 ************************************ 00:34:45.678 END TEST nvmf_digest_clean 00:34:45.678 ************************************ 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:45.678 ************************************ 00:34:45.678 START TEST nvmf_digest_error 00:34:45.678 ************************************ 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=885637 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:45.678 18:55:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 885637 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 885637 ']' 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:45.678 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:45.935 [2024-11-17 18:55:32.278267] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:34:45.935 [2024-11-17 18:55:32.278339] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:45.935 [2024-11-17 18:55:32.349763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.935 [2024-11-17 18:55:32.394126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:45.935 [2024-11-17 18:55:32.394180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:45.935 [2024-11-17 18:55:32.394193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:45.935 [2024-11-17 18:55:32.394219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:45.935 [2024-11-17 18:55:32.394228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:45.935 [2024-11-17 18:55:32.394779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.192 [2024-11-17 18:55:32.535504] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.192 18:55:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.192 null0 00:34:46.192 [2024-11-17 18:55:32.638480] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:46.192 [2024-11-17 18:55:32.662703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=885669 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:46.192 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 885669 /var/tmp/bperf.sock 00:34:46.193 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 885669 ']' 
00:34:46.193 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:46.193 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:46.193 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:46.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:46.193 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:46.193 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.193 [2024-11-17 18:55:32.711010] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:34:46.193 [2024-11-17 18:55:32.711086] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885669 ] 00:34:46.450 [2024-11-17 18:55:32.779270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.450 [2024-11-17 18:55:32.824723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.450 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:46.450 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:46.450 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:46.450 18:55:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:46.708 18:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:46.708 18:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.708 18:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:46.708 18:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.708 18:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:46.708 18:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:47.274 nvme0n1 00:34:47.274 18:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:47.274 18:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.274 18:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:47.274 18:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.274 18:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:47.274 18:55:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:47.274 Running I/O for 2 seconds... 00:34:47.274 [2024-11-17 18:55:33.798730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:47.274 [2024-11-17 18:55:33.798787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.274 [2024-11-17 18:55:33.798807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.274 [2024-11-17 18:55:33.813460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:47.274 [2024-11-17 18:55:33.813488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.274 [2024-11-17 18:55:33.813519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.274 [2024-11-17 18:55:33.829190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:47.274 [2024-11-17 18:55:33.829219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.274 [2024-11-17 18:55:33.829249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.274 [2024-11-17 18:55:33.845700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:47.274 [2024-11-17 18:55:33.845732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11155 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.274 [2024-11-17 18:55:33.845751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.533 [2024-11-17 18:55:33.858305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:47.533 [2024-11-17 18:55:33.858332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.533 [2024-11-17 18:55:33.858364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.533 [2024-11-17 18:55:33.870952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:47.533 [2024-11-17 18:55:33.870983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.533 [2024-11-17 18:55:33.871001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.533 [2024-11-17 18:55:33.883951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:47.533 [2024-11-17 18:55:33.883992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:47.533 [2024-11-17 18:55:33.884008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:47.533 [2024-11-17 18:55:33.900416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:47.533 [2024-11-17 18:55:33.900447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:33.900465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:33.914640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:33.914671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:33.914698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:33.927076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:33.927117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:33.927132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:33.940189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:33.940220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:33.940237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:33.951436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:33.951463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:33.951495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:33.963991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:33.964018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:33.964047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:33.978617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:33.978646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:33.978670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:33.993926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:33.993957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:33.993974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:34.008825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:34.008856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:34.008874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:34.020203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:34.020231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:34.020263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:34.035716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:34.035746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:34.035778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:34.050721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:34.050766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:34.050783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:34.065703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:34.065756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:34.065786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:34.078336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:34.078364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:34.078395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:34.089887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:34.089915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:34.089947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.533 [2024-11-17 18:55:34.102475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.533 [2024-11-17 18:55:34.102509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.533 [2024-11-17 18:55:34.102541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.792 [2024-11-17 18:55:34.115070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.792 [2024-11-17 18:55:34.115098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.792 [2024-11-17 18:55:34.115130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.792 [2024-11-17 18:55:34.131387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.792 [2024-11-17 18:55:34.131415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.792 [2024-11-17 18:55:34.131447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.792 [2024-11-17 18:55:34.145786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.792 [2024-11-17 18:55:34.145815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.792 [2024-11-17 18:55:34.145832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.792 [2024-11-17 18:55:34.158381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.792 [2024-11-17 18:55:34.158411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.792 [2024-11-17 18:55:34.158428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.792 [2024-11-17 18:55:34.172096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.792 [2024-11-17 18:55:34.172126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.792 [2024-11-17 18:55:34.172143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.792 [2024-11-17 18:55:34.183419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.792 [2024-11-17 18:55:34.183447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.793 [2024-11-17 18:55:34.183479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.793 [2024-11-17 18:55:34.197280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.793 [2024-11-17 18:55:34.197306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.793 [2024-11-17 18:55:34.197338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.793 [2024-11-17 18:55:34.210043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.793 [2024-11-17 18:55:34.210070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.793 [2024-11-17 18:55:34.210107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.793 [2024-11-17 18:55:34.222558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.793 [2024-11-17 18:55:34.222585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.793 [2024-11-17 18:55:34.222616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.793 [2024-11-17 18:55:34.235154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.793 [2024-11-17 18:55:34.235182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.793 [2024-11-17 18:55:34.235214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.793 [2024-11-17 18:55:34.247703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.793 [2024-11-17 18:55:34.247732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.793 [2024-11-17 18:55:34.247763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.793 [2024-11-17 18:55:34.261005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.793 [2024-11-17 18:55:34.261049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.793 [2024-11-17 18:55:34.261065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.793 [2024-11-17 18:55:34.275478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.793 [2024-11-17 18:55:34.275507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.793 [2024-11-17 18:55:34.275524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.793 [2024-11-17 18:55:34.287644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.793 [2024-11-17 18:55:34.287697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.793 [2024-11-17 18:55:34.287715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.793 [2024-11-17 18:55:34.302570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.793 [2024-11-17 18:55:34.302597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.793 [2024-11-17 18:55:34.302628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.793 [2024-11-17 18:55:34.319030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.793 [2024-11-17 18:55:34.319060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.793 [2024-11-17 18:55:34.319077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.793 [2024-11-17 18:55:34.333383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.793 [2024-11-17 18:55:34.333421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.793 [2024-11-17 18:55:34.333440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.793 [2024-11-17 18:55:34.344703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.793 [2024-11-17 18:55:34.344756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.793 [2024-11-17 18:55:34.344773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:47.793 [2024-11-17 18:55:34.361115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:47.793 [2024-11-17 18:55:34.361144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:47.793 [2024-11-17 18:55:34.361176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.374915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.052 [2024-11-17 18:55:34.374947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.052 [2024-11-17 18:55:34.374964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.390257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.052 [2024-11-17 18:55:34.390288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.052 [2024-11-17 18:55:34.390305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.402011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.052 [2024-11-17 18:55:34.402040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.052 [2024-11-17 18:55:34.402056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.413917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.052 [2024-11-17 18:55:34.413946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.052 [2024-11-17 18:55:34.413964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.428314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.052 [2024-11-17 18:55:34.428342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.052 [2024-11-17 18:55:34.428373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.443459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.052 [2024-11-17 18:55:34.443490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.052 [2024-11-17 18:55:34.443507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.457095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.052 [2024-11-17 18:55:34.457140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.052 [2024-11-17 18:55:34.457157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.468112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.052 [2024-11-17 18:55:34.468141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.052 [2024-11-17 18:55:34.468171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.483986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.052 [2024-11-17 18:55:34.484016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.052 [2024-11-17 18:55:34.484034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.498882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.052 [2024-11-17 18:55:34.498912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.052 [2024-11-17 18:55:34.498929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.511017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.052 [2024-11-17 18:55:34.511072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.052 [2024-11-17 18:55:34.511089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.522215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.052 [2024-11-17 18:55:34.522244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.052 [2024-11-17 18:55:34.522276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.536803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.052 [2024-11-17 18:55:34.536833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.052 [2024-11-17 18:55:34.536865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.550845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.052 [2024-11-17 18:55:34.550875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.052 [2024-11-17 18:55:34.550893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.052 [2024-11-17 18:55:34.565653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.053 [2024-11-17 18:55:34.565691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.053 [2024-11-17 18:55:34.565719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.053 [2024-11-17 18:55:34.581728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.053 [2024-11-17 18:55:34.581760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.053 [2024-11-17 18:55:34.581778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.053 [2024-11-17 18:55:34.593357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.053 [2024-11-17 18:55:34.593403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.053 [2024-11-17 18:55:34.593420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.053 [2024-11-17 18:55:34.607346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.053 [2024-11-17 18:55:34.607374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.053 [2024-11-17 18:55:34.607406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.053 [2024-11-17 18:55:34.620229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.053 [2024-11-17 18:55:34.620260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.053 [2024-11-17 18:55:34.620277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.311 [2024-11-17 18:55:34.632867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.311 [2024-11-17 18:55:34.632897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.311 [2024-11-17 18:55:34.632915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.311 [2024-11-17 18:55:34.647327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.311 [2024-11-17 18:55:34.647355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.311 [2024-11-17 18:55:34.647386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.311 [2024-11-17 18:55:34.659824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.311 [2024-11-17 18:55:34.659854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.311 [2024-11-17 18:55:34.659872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.311 [2024-11-17 18:55:34.672305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.311 [2024-11-17 18:55:34.672350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.311 [2024-11-17 18:55:34.672368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.311 [2024-11-17 18:55:34.684972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.311 [2024-11-17 18:55:34.685024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.311 [2024-11-17 18:55:34.685041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.311 [2024-11-17 18:55:34.697535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.311 [2024-11-17 18:55:34.697565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.311 [2024-11-17 18:55:34.697598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.311 [2024-11-17 18:55:34.709557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.311 [2024-11-17 18:55:34.709600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.311 [2024-11-17 18:55:34.709617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.311 [2024-11-17 18:55:34.724379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.311 [2024-11-17 18:55:34.724407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.311 [2024-11-17 18:55:34.724439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.311 [2024-11-17 18:55:34.740232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.311 [2024-11-17 18:55:34.740260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.311 [2024-11-17 18:55:34.740292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.311 [2024-11-17 18:55:34.756510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.311 [2024-11-17 18:55:34.756555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.311 [2024-11-17 18:55:34.756572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.311 [2024-11-17 18:55:34.772332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.311 [2024-11-17 18:55:34.772360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.311 [2024-11-17 18:55:34.772391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.311 18581.00 IOPS, 72.58 MiB/s [2024-11-17T17:55:34.887Z] [2024-11-17 18:55:34.787132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.311 [2024-11-17 18:55:34.787161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.311 [2024-11-17 18:55:34.787193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.311 [2024-11-17 18:55:34.798486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.311 [2024-11-17 18:55:34.798514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.312 [2024-11-17 18:55:34.798552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.312 [2024-11-17 18:55:34.812160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.312 [2024-11-17 18:55:34.812203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.312 [2024-11-17 18:55:34.812220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.312 [2024-11-17 18:55:34.826644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.312 [2024-11-17 18:55:34.826693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.312 [2024-11-17 18:55:34.826711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.312 [2024-11-17 18:55:34.840689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.312 [2024-11-17 18:55:34.840720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.312 [2024-11-17 18:55:34.840737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.312 [2024-11-17 18:55:34.852538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.312 [2024-11-17 18:55:34.852567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.312 [2024-11-17 18:55:34.852600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.312 [2024-11-17 18:55:34.866758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.312 [2024-11-17 18:55:34.866789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.312 [2024-11-17 18:55:34.866825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.312 [2024-11-17 18:55:34.882239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.312 [2024-11-17 18:55:34.882267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.312 [2024-11-17 18:55:34.882299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.570 [2024-11-17 18:55:34.896163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.570 [2024-11-17 18:55:34.896192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.570 [2024-11-17 18:55:34.896208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.570 [2024-11-17 18:55:34.908347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.570 [2024-11-17 18:55:34.908375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.570 [2024-11-17 18:55:34.908406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.570 [2024-11-17 18:55:34.922811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.570 [2024-11-17 18:55:34.922861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.570 [2024-11-17 18:55:34.922878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.570 [2024-11-17 18:55:34.937431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.570 [2024-11-17 18:55:34.937462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.570 [2024-11-17 18:55:34.937479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.570 [2024-11-17 18:55:34.948820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.570 [2024-11-17 18:55:34.948848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.570 [2024-11-17 18:55:34.948880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.570 [2024-11-17 18:55:34.962070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.570 [2024-11-17 18:55:34.962100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.570 [2024-11-17 18:55:34.962117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.570 [2024-11-17 18:55:34.975690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.570 [2024-11-17 18:55:34.975720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.570 [2024-11-17 18:55:34.975751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.570 [2024-11-17 18:55:34.988143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.570 [2024-11-17 18:55:34.988173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:48.570 [2024-11-17 18:55:34.988205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:48.570 [2024-11-17 18:55:35.000389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:48.570 [2024-11-17
18:55:35.000418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.571 [2024-11-17 18:55:35.000435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.571 [2024-11-17 18:55:35.015114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.571 [2024-11-17 18:55:35.015144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.571 [2024-11-17 18:55:35.015161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.571 [2024-11-17 18:55:35.029100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.571 [2024-11-17 18:55:35.029129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.571 [2024-11-17 18:55:35.029145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.571 [2024-11-17 18:55:35.042514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.571 [2024-11-17 18:55:35.042559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.571 [2024-11-17 18:55:35.042577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.571 [2024-11-17 18:55:35.057903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x180ee40) 00:34:48.571 [2024-11-17 18:55:35.057934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.571 [2024-11-17 18:55:35.057951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.571 [2024-11-17 18:55:35.073123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.571 [2024-11-17 18:55:35.073153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.571 [2024-11-17 18:55:35.073170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.571 [2024-11-17 18:55:35.084721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.571 [2024-11-17 18:55:35.084750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.571 [2024-11-17 18:55:35.084782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.571 [2024-11-17 18:55:35.099808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.571 [2024-11-17 18:55:35.099839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.571 [2024-11-17 18:55:35.099857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.571 [2024-11-17 18:55:35.111505] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.571 [2024-11-17 18:55:35.111536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.571 [2024-11-17 18:55:35.111553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.571 [2024-11-17 18:55:35.126360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.571 [2024-11-17 18:55:35.126390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.571 [2024-11-17 18:55:35.126408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.571 [2024-11-17 18:55:35.138643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.571 [2024-11-17 18:55:35.138693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.571 [2024-11-17 18:55:35.138710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.150188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.150216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.150253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.164462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.164492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.164510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.179009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.179051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.179067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.190059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.190086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.190117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.204543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.204571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.204601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.219697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.219727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.219746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.233049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.233079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.233096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.245396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.245423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.245455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.257626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.257669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 
18:55:35.257695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.269907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.269935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.269968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.282598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.282626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.282657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.297774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.297804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.297821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.312219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.312249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7613 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.312267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.327865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.327895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.327913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.339326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.339352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.339382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.354727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.354772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.354789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.367854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.367897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.367912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.380066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.380097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.380120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:48.830 [2024-11-17 18:55:35.394866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:48.830 [2024-11-17 18:55:35.394896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:48.830 [2024-11-17 18:55:35.394914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.088 [2024-11-17 18:55:35.410204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.088 [2024-11-17 18:55:35.410234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.088 [2024-11-17 18:55:35.410252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.088 [2024-11-17 18:55:35.421437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x180ee40) 00:34:49.088 [2024-11-17 18:55:35.421464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.088 [2024-11-17 18:55:35.421494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.088 [2024-11-17 18:55:35.435179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.088 [2024-11-17 18:55:35.435209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.088 [2024-11-17 18:55:35.435225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.451346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.451375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 18:55:35.451391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.466276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.466304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 18:55:35.466335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.482146] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.482174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 18:55:35.482190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.499794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.499825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 18:55:35.499843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.510020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.510053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 18:55:35.510085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.525336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.525365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 18:55:35.525396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.539452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.539482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 18:55:35.539499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.553720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.553751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 18:55:35.553768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.567663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.567702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 18:55:35.567720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.578406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.578450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 18:55:35.578466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.595125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.595153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 18:55:35.595186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.607808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.607840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 18:55:35.607857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.620289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.620317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 18:55:35.620348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.635117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.635143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 
18:55:35.635174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.089 [2024-11-17 18:55:35.651304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.089 [2024-11-17 18:55:35.651334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.089 [2024-11-17 18:55:35.651351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.347 [2024-11-17 18:55:35.666250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.347 [2024-11-17 18:55:35.666281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.347 [2024-11-17 18:55:35.666299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.347 [2024-11-17 18:55:35.678580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.347 [2024-11-17 18:55:35.678609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.347 [2024-11-17 18:55:35.678640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.347 [2024-11-17 18:55:35.693003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.347 [2024-11-17 18:55:35.693034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20737 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.347 [2024-11-17 18:55:35.693052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.347 [2024-11-17 18:55:35.708281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.347 [2024-11-17 18:55:35.708308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.347 [2024-11-17 18:55:35.708338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.347 [2024-11-17 18:55:35.723619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.347 [2024-11-17 18:55:35.723646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.347 [2024-11-17 18:55:35.723683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.347 [2024-11-17 18:55:35.739739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.347 [2024-11-17 18:55:35.739766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:49.347 [2024-11-17 18:55:35.739797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:49.347 [2024-11-17 18:55:35.754948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40) 00:34:49.347 [2024-11-17 18:55:35.754990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.347 [2024-11-17 18:55:35.755012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.347 [2024-11-17 18:55:35.771137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:49.347 [2024-11-17 18:55:35.771166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.347 [2024-11-17 18:55:35.771182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.347 [2024-11-17 18:55:35.782049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180ee40)
00:34:49.347 [2024-11-17 18:55:35.782094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:49.347 [2024-11-17 18:55:35.782111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:49.347 18500.50 IOPS, 72.27 MiB/s
00:34:49.347 Latency(us)
00:34:49.347 [2024-11-17T17:55:35.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:49.348 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:34:49.348 nvme0n1 : 2.00 18520.74 72.35 0.00 0.00 6905.03 3495.25 22622.06
00:34:49.348 [2024-11-17T17:55:35.924Z] ===================================================================================================================
00:34:49.348 [2024-11-17T17:55:35.924Z] Total : 18520.74 72.35 0.00 0.00 6905.03 3495.25 22622.06
00:34:49.348 {
00:34:49.348   "results": [
00:34:49.348     {
00:34:49.348       "job": "nvme0n1",
00:34:49.348       "core_mask": "0x2",
00:34:49.348       "workload": "randread",
00:34:49.348       "status": "finished",
00:34:49.348       "queue_depth": 128,
00:34:49.348       "io_size": 4096,
00:34:49.348       "runtime": 2.004726,
00:34:49.348       "iops": 18520.735502008753,
00:34:49.348       "mibps": 72.34662305472169,
00:34:49.348       "io_failed": 0,
00:34:49.348       "io_timeout": 0,
00:34:49.348       "avg_latency_us": 6905.02927630693,
00:34:49.348       "min_latency_us": 3495.2533333333336,
00:34:49.348       "max_latency_us": 22622.056296296298
00:34:49.348     }
00:34:49.348   ],
00:34:49.348   "core_count": 1
00:34:49.348 }
00:34:49.348 18:55:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:49.348 18:55:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:49.348 18:55:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:49.348 | .driver_specific
00:34:49.348 | .nvme_error
00:34:49.348 | .status_code
00:34:49.348 | .command_transient_transport_error'
00:34:49.348 18:55:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:49.606 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
00:34:49.606 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 885669
00:34:49.606 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 885669 ']'
00:34:49.606 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 885669
00:34:49.606 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:34:49.606 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:49.606 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 885669
00:34:49.606 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:49.606 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:49.606 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 885669'
00:34:49.606 killing process with pid 885669
00:34:49.606 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 885669
00:34:49.606 Received shutdown signal, test time was about 2.000000 seconds
00:34:49.606
00:34:49.606 Latency(us)
00:34:49.606 [2024-11-17T17:55:36.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:49.606 [2024-11-17T17:55:36.182Z] ===================================================================================================================
00:34:49.606 [2024-11-17T17:55:36.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:49.606 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 885669
00:34:49.865 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:34:49.865 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:49.865 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:34:49.865 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:34:49.865 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:34:49.865 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@58 -- # bperfpid=886071 00:34:49.865 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:34:49.865 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 886071 /var/tmp/bperf.sock 00:34:49.865 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 886071 ']' 00:34:49.865 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:49.865 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:49.865 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:49.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:49.865 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:49.865 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:49.865 [2024-11-17 18:55:36.374901] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:34:49.865 [2024-11-17 18:55:36.375013] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886071 ] 00:34:49.865 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:49.865 Zero copy mechanism will not be used. 
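The run above emits per-job statistics both as a table and as JSON, and `host/digest.sh` later pulls the transient-error count out of the `bdev_get_iostat` RPC with the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` before asserting it is greater than zero. As a hedged sketch, the same extraction and a throughput cross-check can be reproduced in Python; the bdevperf JSON is copied from the log above, while the `bdev_get_iostat` reply shape is only inferred from that jq path (the nesting is an assumption, and the value 145 is taken from the `(( 145 > 0 ))` check in this run):

```python
import json

# bdevperf per-job result, copied from the JSON block printed by the run above.
bperf_result = json.loads("""
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randread",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 2.004726,
      "iops": 18520.735502008753,
      "mibps": 72.34662305472169,
      "io_failed": 0,
      "io_timeout": 0
    }
  ],
  "core_count": 1
}
""")

job = bperf_result["results"][0]
# Cross-check MiB/s against IOPS * IO size: 4096-byte reads, so MiB/s = IOPS / 256.
mibps = job["iops"] * job["io_size"] / (1024 * 1024)
assert abs(mibps - job["mibps"]) < 1e-9

# Hypothetical bdev_get_iostat reply: only the nested key path is taken from
# the jq filter in host/digest.sh; the surrounding structure and the count
# of 145 are assumptions based on this run's output.
iostat = {
    "bdevs": [
        {
            "name": "nvme0n1",
            "driver_specific": {
                "nvme_error": {
                    "status_code": {"command_transient_transport_error": 145}
                }
            }
        }
    ]
}

# Equivalent of:
#   jq -r '.bdevs[0] | .driver_specific | .nvme_error
#          | .status_code | .command_transient_transport_error'
errcount = (iostat["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])
print(errcount)  # prints 145; the script then asserts the count is > 0
```

This mirrors the `get_transient_errcount` helper's flow: every data-digest error injected by `accel_error_inject_error` surfaces as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, so a positive counter confirms the corrupted CRC32C digests were detected.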
00:34:50.123 [2024-11-17 18:55:36.448118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.123 [2024-11-17 18:55:36.495283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:50.123 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:50.123 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:50.123 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:50.123 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:50.381 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:50.381 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.381 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:50.381 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.381 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:50.381 18:55:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:50.947 nvme0n1 00:34:50.947 18:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:50.947 18:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.947 18:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:50.947 18:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.947 18:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:50.947 18:55:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:50.947 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:50.947 Zero copy mechanism will not be used. 00:34:50.947 Running I/O for 2 seconds... 00:34:50.947 [2024-11-17 18:55:37.387961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.388034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.388055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.393473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.393509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.393527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:50.947 
[2024-11-17 18:55:37.399664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.399708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.399737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.407405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.407437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.407455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.413266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.413297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.413315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.417648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.417687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.417707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.422606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.422637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.422655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.429661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.429701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.429720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.437568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.437597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.437629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.443941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.443989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.444008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.449848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.449880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.449899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.454135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.454166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.454184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.456917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.456946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.456964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.460500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.460530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:50.947 [2024-11-17 18:55:37.460553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.464642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.464671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.464698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.467382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.467411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.467428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.471250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.471281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.471299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.476087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.476117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.476135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.480283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.480314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.480332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.484273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.947 [2024-11-17 18:55:37.484304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.947 [2024-11-17 18:55:37.484322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:50.947 [2024-11-17 18:55:37.489900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.948 [2024-11-17 18:55:37.489931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.948 [2024-11-17 18:55:37.489949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:50.948 [2024-11-17 18:55:37.494681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.948 [2024-11-17 18:55:37.494712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.948 [2024-11-17 18:55:37.494730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:50.948 [2024-11-17 18:55:37.498062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.948 [2024-11-17 18:55:37.498112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.948 [2024-11-17 18:55:37.498131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:50.948 [2024-11-17 18:55:37.502967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.948 [2024-11-17 18:55:37.502999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.948 [2024-11-17 18:55:37.503017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:50.948 [2024-11-17 18:55:37.508171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.948 [2024-11-17 18:55:37.508200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.948 [2024-11-17 18:55:37.508217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:50.948 [2024-11-17 18:55:37.513402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 
00:34:50.948 [2024-11-17 18:55:37.513432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.948 [2024-11-17 18:55:37.513465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:50.948 [2024-11-17 18:55:37.517987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:50.948 [2024-11-17 18:55:37.518018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:50.948 [2024-11-17 18:55:37.518035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.207 [2024-11-17 18:55:37.522711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.207 [2024-11-17 18:55:37.522742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.207 [2024-11-17 18:55:37.522760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.207 [2024-11-17 18:55:37.527923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.207 [2024-11-17 18:55:37.527954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.207 [2024-11-17 18:55:37.527972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.207 [2024-11-17 18:55:37.533475] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.207 [2024-11-17 18:55:37.533506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.207 [2024-11-17 18:55:37.533524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.207 [2024-11-17 18:55:37.539461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.207 [2024-11-17 18:55:37.539492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.207 [2024-11-17 18:55:37.539510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.207 [2024-11-17 18:55:37.544652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.207 [2024-11-17 18:55:37.544689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.207 [2024-11-17 18:55:37.544708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.207 [2024-11-17 18:55:37.550632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.207 [2024-11-17 18:55:37.550663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.207 [2024-11-17 18:55:37.550691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:34:51.207 [2024-11-17 18:55:37.554059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.207 [2024-11-17 18:55:37.554090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.207 [2024-11-17 18:55:37.554124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.207 [2024-11-17 18:55:37.559296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.207 [2024-11-17 18:55:37.559341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.207 [2024-11-17 18:55:37.559357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.207 [2024-11-17 18:55:37.565539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.207 [2024-11-17 18:55:37.565570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.207 [2024-11-17 18:55:37.565603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.207 [2024-11-17 18:55:37.571462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.207 [2024-11-17 18:55:37.571491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.207 [2024-11-17 18:55:37.571508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.207 [2024-11-17 18:55:37.577652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.207 [2024-11-17 18:55:37.577703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.207 [2024-11-17 18:55:37.577722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.207 [2024-11-17 18:55:37.583031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.207 [2024-11-17 18:55:37.583077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.207 [2024-11-17 18:55:37.583094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.207 [2024-11-17 18:55:37.588153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.207 [2024-11-17 18:55:37.588184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.207 [2024-11-17 18:55:37.588223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.207 [2024-11-17 18:55:37.593148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.207 [2024-11-17 18:55:37.593193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.207 [2024-11-17 
18:55:37.593211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.207 [2024-11-17 18:55:37.598504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.207 [2024-11-17 18:55:37.598537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.207 [2024-11-17 18:55:37.598555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.207 [2024-11-17 18:55:37.603172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.207 [2024-11-17 18:55:37.603205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.207 [2024-11-17 18:55:37.603223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.207 [2024-11-17 18:55:37.608531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.207 [2024-11-17 18:55:37.608563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.608581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.613630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.613682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.613702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.618189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.618219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.618236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.622837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.622869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.622886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.627531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.627575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.627591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.632433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.632483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.632501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.638112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.638157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.638175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.645777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.645808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.645826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.651661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.651715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.651733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.657274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.657304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.657337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.662640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.662679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.662701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.667212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.667256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.667273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.671810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.671840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.671858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.676220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.676249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.676272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.680197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.680226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.680244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.683092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.683121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.683139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.687131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.687162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.687179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.692246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.692277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.692295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.696438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.696469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.696488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.701902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.701933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.701951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.709817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.709849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.709867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.717867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.717899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.717917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.726142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.726185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.726205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.733828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.733877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.733895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.741658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.741699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.741718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.749372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.749403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.749420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.208 [2024-11-17 18:55:37.756993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.208 [2024-11-17 18:55:37.757045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.208 [2024-11-17 18:55:37.757064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.209 [2024-11-17 18:55:37.764491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.209 [2024-11-17 18:55:37.764523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.209 [2024-11-17 18:55:37.764557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.209 [2024-11-17 18:55:37.772172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.209 [2024-11-17 18:55:37.772219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.209 [2024-11-17 18:55:37.772236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.209 [2024-11-17 18:55:37.780005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.209 [2024-11-17 18:55:37.780038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.209 [2024-11-17 18:55:37.780056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.787616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.787649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.787668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.795167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.795199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.795218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.802836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.802870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.802889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.810446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.810495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.810513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.818103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.818149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.818166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.823720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.823753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.823771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.828729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.828760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.828778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.834177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.834209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.834229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.839971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.840003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.840035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.845216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.845248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.845271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.851038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.851070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.851088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.856644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.856683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.856719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.860441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.860481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.860499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.867152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.867182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.867199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.873556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.873588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.873620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.879441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.879471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.879488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.885268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.885298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.885330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.890766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.890797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.890815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.895410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.895447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.895466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.900073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.900103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.468 [2024-11-17 18:55:37.900120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.468 [2024-11-17 18:55:37.904729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.468 [2024-11-17 18:55:37.904775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.904792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.909698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.909728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.909746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.915893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.915923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.915941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.920545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.920576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.920594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.925650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.925688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.925708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.930227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.930278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.930296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.934910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.934941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.934958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.940430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.940461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.940478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.947306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.947337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.947354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.954423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.954456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.954474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.960577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.960610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.960628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.966425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.966458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.966476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.971690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.971730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.971747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.977264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.977296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.977314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.984026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.984057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.984075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.990405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.990437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.990461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:37.996512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:37.996544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:37.996562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:38.002572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:38.002604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:38.002622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:38.008261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:38.008292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:38.008310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:38.013983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:38.014015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:38.014033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:38.019653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:38.019694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:38.019714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:38.025898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:38.025930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:38.025947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:38.032585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:38.032616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:38.032634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:51.469 [2024-11-17 18:55:38.038792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.469 [2024-11-17 18:55:38.038823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.469 [2024-11-17 18:55:38.038841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:51.728 [2024-11-17 18:55:38.044452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:51.728 [2024-11-17 18:55:38.044484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.728 [2024-11-17 18:55:38.044503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022
p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.050503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.050550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.050567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.056431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.056461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.056493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.062276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.062307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.062325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.067537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.067568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.067586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.072594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.072625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.072643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.078020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.078052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.078070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.083835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.083867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.083886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.089765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.089796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.089822] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.097269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.097301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.097319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.103536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.103569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.103587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.109257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.109288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.109306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.113180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.113211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.113229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.116576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.116606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.116624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.120628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.120658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.120684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.123596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.123626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.123643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.127613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.127644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.127662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.132116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.132152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.132171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.137065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.137097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.137115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.142095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.142127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.142145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.146704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 
18:55:38.146734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.146752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.151266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.151297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.151314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.155956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.155986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.156003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.160664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.728 [2024-11-17 18:55:38.160703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.728 [2024-11-17 18:55:38.160721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.728 [2024-11-17 18:55:38.165256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.165286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.165304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.169841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.169871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.169888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.174385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.174415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.174432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.178943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.178974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.178991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.183612] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.183643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.183660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.188324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.188355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.188373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.192860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.192889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.192906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.197425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.197455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.197472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.201917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.201946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.201963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.206769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.206800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.206818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.211747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.211778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.211802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.216400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.216431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.216449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.220902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.220936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.220953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.225499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.225529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.225546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.230017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.230047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.230065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.234719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.234749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 
18:55:38.234766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.239290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.239320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.239337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.243858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.243888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.243906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.248473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.248502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.248519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.252943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.252973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.252990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.257542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.257572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.257589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.262199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.262229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.262247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.266721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.266751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.266768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.271393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.271424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.271441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.275927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.275957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.275974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.280514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.280544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.280561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.729 [2024-11-17 18:55:38.285112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.729 [2024-11-17 18:55:38.285142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.729 [2024-11-17 18:55:38.285159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.730 [2024-11-17 18:55:38.289732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23686a0) 00:34:51.730 [2024-11-17 18:55:38.289762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.730 [2024-11-17 18:55:38.289789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.730 [2024-11-17 18:55:38.294386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.730 [2024-11-17 18:55:38.294417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.730 [2024-11-17 18:55:38.294434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.730 [2024-11-17 18:55:38.299079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.730 [2024-11-17 18:55:38.299108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.730 [2024-11-17 18:55:38.299125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.303695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.303725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.303742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.308399] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.308428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.308445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.312923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.312952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.312969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.317564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.317593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.317610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.322143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.322173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.322190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.327000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.327031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.327048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.332142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.332178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.332197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.336842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.336873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.336891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.339830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.339861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.339879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.344866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.344896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.344914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.349694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.349724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.349742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.354835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.354865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.354882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.359775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.359806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 
18:55:38.359823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.364820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.364851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.364869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.369876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.369906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.369924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.375046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.375077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.375095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.380688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.380719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6624 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.380736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.387010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.387055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.387074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.989 5838.00 IOPS, 729.75 MiB/s [2024-11-17T17:55:38.565Z] [2024-11-17 18:55:38.394387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.394419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.989 [2024-11-17 18:55:38.394436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.989 [2024-11-17 18:55:38.400233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.989 [2024-11-17 18:55:38.400265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.400302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.404009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 
18:55:38.404056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.404073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.408506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.408539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.408556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.414390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.414421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.414438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.420932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.420979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.421003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.427757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.427793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.427811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.434392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.434423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.434441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.440961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.440992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.441010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.446323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.446355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.446373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.449627] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.449657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.449681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.452819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.452849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.452867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.455779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.455809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.455826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.458763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.458791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.458809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.462575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.462605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.462623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.465935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.465967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.465984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.470294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.470325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.470342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.474469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.474500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.474517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.477511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.477542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.477559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.481464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.481494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.481512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.486232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.486277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.486295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.491140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.491171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.491190] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.494252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.494282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.494306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.497900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.497929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.497947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.501559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.501588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.501605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.504308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.504338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.504355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.508206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.508235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.508253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.512159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.512188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.990 [2024-11-17 18:55:38.512205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.990 [2024-11-17 18:55:38.515217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.990 [2024-11-17 18:55:38.515245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.991 [2024-11-17 18:55:38.515262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.991 [2024-11-17 18:55:38.520432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.991 [2024-11-17 18:55:38.520464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.991 [2024-11-17 18:55:38.520482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.991 [2024-11-17 18:55:38.526069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.991 [2024-11-17 18:55:38.526101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.991 [2024-11-17 18:55:38.526120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.991 [2024-11-17 18:55:38.532708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.991 [2024-11-17 18:55:38.532745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.991 [2024-11-17 18:55:38.532778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.991 [2024-11-17 18:55:38.539211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.991 [2024-11-17 18:55:38.539242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.991 [2024-11-17 18:55:38.539261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.991 [2024-11-17 18:55:38.545940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.991 [2024-11-17 18:55:38.545972] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.991 [2024-11-17 18:55:38.545990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.991 [2024-11-17 18:55:38.552571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.991 [2024-11-17 18:55:38.552618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.991 [2024-11-17 18:55:38.552635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.991 [2024-11-17 18:55:38.558479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:51.991 [2024-11-17 18:55:38.558511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.991 [2024-11-17 18:55:38.558529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.564935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.564967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.564986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.570499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.570531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.570548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.576417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.576449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.576466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.582274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.582307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.582325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.588279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.588310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.588328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.594501] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.594534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.594552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.600741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.600773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.600791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.607944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.607976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.607994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.614457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.614489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.614507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.620586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.620617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.620635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.626587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.626618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.626637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.632789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.632822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.632840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.638804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.638851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.638875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.645143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.645175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.645193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.652307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.652339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.652357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.659074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.659106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 18:55:38.659124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.250 [2024-11-17 18:55:38.665389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.250 [2024-11-17 18:55:38.665421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.250 [2024-11-17 
18:55:38.665439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.250 [2024-11-17 18:55:38.671483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.250 [2024-11-17 18:55:38.671514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.250 [2024-11-17 18:55:38.671532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.677864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.677897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.677915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.684143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.684189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.684206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.689758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.689790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.689808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.694788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.694826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.694845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.699968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.700000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.700017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.704114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.704145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.704163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.707917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.707948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.707965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.712471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.712500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.712518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.716931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.716960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.716992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.721561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.721590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.721607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.726568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.726597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.726615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.732125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.732156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.732173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.737372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.737418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.737436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.742547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.742580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.742598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.747229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.747274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.747291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.751901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.751932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.751949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.756696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.756726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.756744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.761240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.761284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.761302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.765834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.765864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.765881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.770446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.770476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.770493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.775166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.775202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.775221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.780620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.780651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.780669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.785317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.785347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.785363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.791440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.791472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.791490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.796561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.796592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.796609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.801843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.801874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.251 [2024-11-17 18:55:38.801893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.251 [2024-11-17 18:55:38.807881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.251 [2024-11-17 18:55:38.807913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.252 [2024-11-17 18:55:38.807930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.252 [2024-11-17 18:55:38.815651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.252 [2024-11-17 18:55:38.815693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.252 [2024-11-17 18:55:38.815714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.252 [2024-11-17 18:55:38.822598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.252 [2024-11-17 18:55:38.822630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.252 [2024-11-17 18:55:38.822648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.828369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.828402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.828435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.831803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.831848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.831865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.838139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.838170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.838188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.842875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.842905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.842938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.847429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.847457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.847488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.852005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.852034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.852065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.856544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.856573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.856605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.861193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.861221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.861252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.865734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.865763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.865796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.870464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.870509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.870527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.875107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.875137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.875155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.879885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.879916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.879933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.884926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.884957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.884975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.890232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.890263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.890281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.894893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.894922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.894940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.899612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.899642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.899684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.904305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.904353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.511 [2024-11-17 18:55:38.904375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.511 [2024-11-17 18:55:38.909785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.511 [2024-11-17 18:55:38.909822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.909840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.915012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.915043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.915060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.919772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.919803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.919820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.925511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.925540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.925571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.931019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.931065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.931082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.937481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.937512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.937530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.942199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.942230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.942248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.946963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.947007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.947024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.952427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.952458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.952476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.957722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.957754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.957772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.963773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.963805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.963823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.971423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.971455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.971474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.977578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.977609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.977628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.983547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.983578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.983597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.989121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.989153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.989172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:38.994972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:38.995004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:38.995022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:39.001741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:39.001773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:39.001791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:39.007528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:39.007559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:39.007584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:39.013013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:39.013044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:39.013062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:39.018308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:39.018341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:39.018359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:39.023184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:39.023216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:39.023233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:39.027897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:39.027926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:39.027943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:39.032578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:39.032607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:39.032626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:39.037075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:39.037105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:39.037121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:39.041769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:39.041799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:39.041816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:39.046756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:39.046787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:39.046804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:39.052115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.512 [2024-11-17 18:55:39.052148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.512 [2024-11-17 18:55:39.052166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:52.512 [2024-11-17 18:55:39.057344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.513 [2024-11-17 18:55:39.057376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.513 [2024-11-17 18:55:39.057410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:52.513 [2024-11-17 18:55:39.062808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.513 [2024-11-17 18:55:39.062841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.513 [2024-11-17 18:55:39.062859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:52.513 [2024-11-17 18:55:39.068301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0)
00:34:52.513 [2024-11-17 18:55:39.068333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:52.513 [2024-11-17 18:55:39.068351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:52.513 [2024-11-17 18:55:39.074936]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.513 [2024-11-17 18:55:39.074968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.513 [2024-11-17 18:55:39.074985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.513 [2024-11-17 18:55:39.081525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.513 [2024-11-17 18:55:39.081558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.513 [2024-11-17 18:55:39.081576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.772 [2024-11-17 18:55:39.088159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.772 [2024-11-17 18:55:39.088191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.772 [2024-11-17 18:55:39.088208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.772 [2024-11-17 18:55:39.094945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.772 [2024-11-17 18:55:39.094992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.772 [2024-11-17 18:55:39.095009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:34:52.772 [2024-11-17 18:55:39.101409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.772 [2024-11-17 18:55:39.101441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.772 [2024-11-17 18:55:39.101465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.772 [2024-11-17 18:55:39.105055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.772 [2024-11-17 18:55:39.105102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.772 [2024-11-17 18:55:39.105120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.772 [2024-11-17 18:55:39.111861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.772 [2024-11-17 18:55:39.111893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.772 [2024-11-17 18:55:39.111911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.772 [2024-11-17 18:55:39.118511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.772 [2024-11-17 18:55:39.118556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.772 [2024-11-17 18:55:39.118573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.772 [2024-11-17 18:55:39.124746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.772 [2024-11-17 18:55:39.124777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.772 [2024-11-17 18:55:39.124795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.772 [2024-11-17 18:55:39.131639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.772 [2024-11-17 18:55:39.131672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.772 [2024-11-17 18:55:39.131701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.772 [2024-11-17 18:55:39.137761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.772 [2024-11-17 18:55:39.137792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.772 [2024-11-17 18:55:39.137809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.772 [2024-11-17 18:55:39.143129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.772 [2024-11-17 18:55:39.143160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.772 [2024-11-17 18:55:39.143194] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.772 [2024-11-17 18:55:39.148775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.772 [2024-11-17 18:55:39.148806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.772 [2024-11-17 18:55:39.148825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.153830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.153882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.153899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.159198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.159230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.159248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.165408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.165459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.165478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.171115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.171162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.171180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.176487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.176518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.176536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.181854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.181884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.181902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.187720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.187751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.187769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.194706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.194738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.194755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.202260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.202292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.202311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.208220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.208251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.208269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.213921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.213953] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.213970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.218670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.218707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.218725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.223372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.223417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.223435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.228135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.228164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.228197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.232612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.232642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.232660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.236908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.236937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.236954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.241760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.241790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.241807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.247284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.247315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.247339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.253307] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.253339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.253357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.260232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.260264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.260283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.265823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.265882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.265901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.271194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.271225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.271244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.276478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.276508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.276526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.281968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.282015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.282034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.287085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.287117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.287136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.292159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.292189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.292207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.773 [2024-11-17 18:55:39.297548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.773 [2024-11-17 18:55:39.297586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.773 [2024-11-17 18:55:39.297605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.774 [2024-11-17 18:55:39.302568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.774 [2024-11-17 18:55:39.302599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.774 [2024-11-17 18:55:39.302617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.774 [2024-11-17 18:55:39.307923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.774 [2024-11-17 18:55:39.307954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.774 [2024-11-17 18:55:39.307972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.774 [2024-11-17 18:55:39.313168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.774 [2024-11-17 18:55:39.313200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.774 [2024-11-17 18:55:39.313217] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.774 [2024-11-17 18:55:39.316938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.774 [2024-11-17 18:55:39.316973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.774 [2024-11-17 18:55:39.316991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.774 [2024-11-17 18:55:39.320932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.774 [2024-11-17 18:55:39.320963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.774 [2024-11-17 18:55:39.320981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.774 [2024-11-17 18:55:39.325642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.774 [2024-11-17 18:55:39.325681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.774 [2024-11-17 18:55:39.325701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:52.774 [2024-11-17 18:55:39.331146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.774 [2024-11-17 18:55:39.331177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:52.774 [2024-11-17 18:55:39.331195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:52.774 [2024-11-17 18:55:39.335861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.774 [2024-11-17 18:55:39.335892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.774 [2024-11-17 18:55:39.335916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:52.774 [2024-11-17 18:55:39.340756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.774 [2024-11-17 18:55:39.340786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.774 [2024-11-17 18:55:39.340803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:52.774 [2024-11-17 18:55:39.345352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:52.774 [2024-11-17 18:55:39.345382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:52.774 [2024-11-17 18:55:39.345402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.032 [2024-11-17 18:55:39.350102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:53.032 [2024-11-17 18:55:39.350133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.032 [2024-11-17 18:55:39.350150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.032 [2024-11-17 18:55:39.354884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:53.032 [2024-11-17 18:55:39.354914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.032 [2024-11-17 18:55:39.354932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.032 [2024-11-17 18:55:39.359545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:53.032 [2024-11-17 18:55:39.359575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.032 [2024-11-17 18:55:39.359593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.032 [2024-11-17 18:55:39.365602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:53.032 [2024-11-17 18:55:39.365633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.032 [2024-11-17 18:55:39.365651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.032 [2024-11-17 18:55:39.370806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:53.032 [2024-11-17 18:55:39.370837] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.032 [2024-11-17 18:55:39.370855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.032 [2024-11-17 18:55:39.377158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:53.032 [2024-11-17 18:55:39.377190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.032 [2024-11-17 18:55:39.377208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.032 [2024-11-17 18:55:39.384830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:53.032 [2024-11-17 18:55:39.384868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.032 [2024-11-17 18:55:39.384887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.032 5824.50 IOPS, 728.06 MiB/s [2024-11-17T17:55:39.608Z] [2024-11-17 18:55:39.393736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23686a0) 00:34:53.032 [2024-11-17 18:55:39.393767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.032 [2024-11-17 18:55:39.393786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.032 00:34:53.032 Latency(us) 00:34:53.032 [2024-11-17T17:55:39.608Z] 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:53.032 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:53.032 nvme0n1 : 2.00 5819.37 727.42 0.00 0.00 2744.42 649.29 8204.14
00:34:53.032 [2024-11-17T17:55:39.608Z] ===================================================================================================================
00:34:53.032 [2024-11-17T17:55:39.608Z] Total : 5819.37 727.42 0.00 0.00 2744.42 649.29 8204.14
00:34:53.032 {
00:34:53.032 "results": [
00:34:53.032 {
00:34:53.032 "job": "nvme0n1",
00:34:53.032 "core_mask": "0x2",
00:34:53.032 "workload": "randread",
00:34:53.032 "status": "finished",
00:34:53.032 "queue_depth": 16,
00:34:53.032 "io_size": 131072,
00:34:53.032 "runtime": 2.004512,
00:34:53.032 "iops": 5819.371497900736,
00:34:53.032 "mibps": 727.421437237592,
00:34:53.032 "io_failed": 0,
00:34:53.032 "io_timeout": 0,
00:34:53.032 "avg_latency_us": 2744.417797844136,
00:34:53.032 "min_latency_us": 649.2918518518519,
00:34:53.032 "max_latency_us": 8204.136296296296
00:34:53.032 }
00:34:53.032 ],
00:34:53.032 "core_count": 1
00:34:53.032 }
00:34:53.032 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:53.032 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:53.032 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:53.032 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:53.032 | .driver_specific
00:34:53.032 | .nvme_error
00:34:53.032 | .status_code
00:34:53.032 | .command_transient_transport_error'
00:34:53.289 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 377 > 0 ))
00:34:53.289
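For readers following the harness: `get_transient_errcount` above fetches `bdev_get_iostat` over the bperf RPC socket and extracts `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error` with jq, then asserts the count is positive (here, `(( 377 > 0 ))`). The same pass/fail signal can be sketched offline by counting the printed completions instead of querying the RPC; the sample log lines below are hypothetical stand-ins, not output from this run.

```shell
# Minimal sketch, assuming captured bdevperf log text rather than a live
# /var/tmp/bperf.sock RPC endpoint. The sample lines are stand-ins.
log='COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9
SUCCESS (00/00) qid:1 cid:1'

# Count completions that surfaced as transient transport errors.
errcount=$(printf '%s\n' "$log" | grep -c 'TRANSIENT TRANSPORT ERROR')
echo "$errcount"

# Mirror of the digest.sh assertion: the test only passes when at least
# one injected CRC error actually reached the initiator.
[ "$errcount" -gt 0 ] && echo 'digest error path exercised'
```

In the real test the count comes from the controller's `--nvme-error-stat` counters, which is more robust than log scraping; this sketch only illustrates the shape of the check.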
18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 886071
00:34:53.289 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 886071 ']'
00:34:53.289 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 886071
00:34:53.289 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:34:53.290 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:53.290 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 886071
00:34:53.290 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:53.290 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:53.290 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 886071'
killing process with pid 886071
18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 886071
Received shutdown signal, test time was about 2.000000 seconds
00:34:53.290
00:34:53.290 Latency(us)
00:34:53.290 [2024-11-17T17:55:39.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:53.290 [2024-11-17T17:55:39.866Z] ===================================================================================================================
00:34:53.290 [2024-11-17T17:55:39.866Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:53.290 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 886071
00:34:53.548 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:34:53.548 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:53.548 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:53.548 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:34:53.548 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:34:53.548 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=886480
00:34:53.548 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:34:53.548 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 886480 /var/tmp/bperf.sock
00:34:53.548 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 886480 ']'
00:34:53.548 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:53.548 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:53.548 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:53.548 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:53.548 18:55:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:53.548 [2024-11-17 18:55:39.971262] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:34:53.548 [2024-11-17 18:55:39.971346] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886480 ]
00:34:53.548 [2024-11-17 18:55:40.041489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:53.548 [2024-11-17 18:55:40.089419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:53.806 18:55:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:53.807 18:55:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:34:53.807 18:55:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:53.807 18:55:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:54.064 18:55:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:54.064 18:55:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.064 18:55:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:54.064 18:55:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.064 18:55:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:54.064 18:55:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:54.630 nvme0n1
00:34:54.630 18:55:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:34:54.630 18:55:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.630 18:55:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:54.630 18:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.630 18:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:54.630 18:55:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:54.630 Running I/O for 2 seconds...
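The trace above wires the injection end-to-end for the randwrite pass: the controller is attached with `--ddgst` so data digests (CRC32C) are verified on the NVMe/TCP data PDUs, then `accel_error_inject_error -o crc32c -t corrupt -i 256` makes the next 256 crc32c operations produce wrong results, so each affected I/O completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) and is counted by the `--nvme-error-stat` counters. Condensed from the xtrace (this is a reference sequence, not a standalone script: it assumes the running bdevperf instance, its `/var/tmp/bperf.sock` socket, and a target listening at 10.0.0.2:4420 from this log):

```
# against the bdevperf app's RPC socket
rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# via rpc_cmd (the autotest target's RPC): corrupt the next 256 crc32c results
accel_error_inject_error -o crc32c -t corrupt -i 256

# kick off the timed workload
bdevperf.py -s /var/tmp/bperf.sock perform_tests
```

`--bdev-retry-count -1` keeps the bdev layer retrying through the injected failures so the run completes and the error counters, rather than I/O failures, carry the verdict.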
00:34:54.630 [2024-11-17 18:55:41.147169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f3e60 00:34:54.630 [2024-11-17 18:55:41.148416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.630 [2024-11-17 18:55:41.148473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:54.630 [2024-11-17 18:55:41.159585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f6cc8 00:34:54.630 [2024-11-17 18:55:41.160780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.630 [2024-11-17 18:55:41.160811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:54.630 [2024-11-17 18:55:41.174314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eea00 00:34:54.630 [2024-11-17 18:55:41.176365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.630 [2024-11-17 18:55:41.176411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:54.630 [2024-11-17 18:55:41.182926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166fdeb0 00:34:54.630 [2024-11-17 18:55:41.183793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.630 [2024-11-17 18:55:41.183837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:54.630 [2024-11-17 18:55:41.198113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f4298 00:34:54.630 [2024-11-17 18:55:41.200044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.630 [2024-11-17 18:55:41.200074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.206560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e1710 00:34:54.889 [2024-11-17 18:55:41.207562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.207592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.219000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e4140 00:34:54.889 [2024-11-17 18:55:41.219710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.219741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.232721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166ea680 00:34:54.889 [2024-11-17 18:55:41.234158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.234202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.244482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e88f8 00:34:54.889 [2024-11-17 18:55:41.246129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.246174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.255184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166fef90 00:34:54.889 [2024-11-17 18:55:41.256927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.256957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.265444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f6020 00:34:54.889 [2024-11-17 18:55:41.266321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.266351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.279813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e6738 00:34:54.889 [2024-11-17 18:55:41.281283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.281313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.291260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f3a28 00:34:54.889 [2024-11-17 18:55:41.292414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.292443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.303008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e6fa8 00:34:54.889 [2024-11-17 18:55:41.304068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.304113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.316704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166fe720 00:34:54.889 [2024-11-17 18:55:41.318289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.318334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.327346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e12d8 00:34:54.889 [2024-11-17 18:55:41.329114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 
[2024-11-17 18:55:41.329143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.339483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166de470 00:34:54.889 [2024-11-17 18:55:41.340970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.340998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.351387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166ec408 00:34:54.889 [2024-11-17 18:55:41.352626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.352670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.362771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f8618 00:34:54.889 [2024-11-17 18:55:41.363930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.363972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.374906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166de470 00:34:54.889 [2024-11-17 18:55:41.376118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:583 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.376163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.387039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166fa7d8 00:34:54.889 [2024-11-17 18:55:41.387932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.387962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.399461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166fe2e8 00:34:54.889 [2024-11-17 18:55:41.400564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.400593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.410608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f4298 00:34:54.889 [2024-11-17 18:55:41.411583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.411612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.422685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e12d8 00:34:54.889 [2024-11-17 18:55:41.423860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:103 nsid:1 lba:23843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.423904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:54.889 [2024-11-17 18:55:41.434038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f4b08 00:34:54.889 [2024-11-17 18:55:41.435143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.889 [2024-11-17 18:55:41.435186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:54.890 [2024-11-17 18:55:41.445720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f81e0 00:34:54.890 [2024-11-17 18:55:41.446852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.890 [2024-11-17 18:55:41.446895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:54.890 [2024-11-17 18:55:41.459836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f2510 00:34:54.890 [2024-11-17 18:55:41.461772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:54.890 [2024-11-17 18:55:41.461815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.468509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e23b8 00:34:55.149 [2024-11-17 18:55:41.469424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.469467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.481893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166ee190 00:34:55.149 [2024-11-17 18:55:41.483149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.483179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.495884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.149 [2024-11-17 18:55:41.497665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.497715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.504244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eea00 00:34:55.149 [2024-11-17 18:55:41.505255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.505298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.516348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e1b48 00:34:55.149 
[2024-11-17 18:55:41.517358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.517401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.531028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f6890 00:34:55.149 [2024-11-17 18:55:41.532705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:69 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.532757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.543372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e7c50 00:34:55.149 [2024-11-17 18:55:41.545247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.545290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.551713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166df118 00:34:55.149 [2024-11-17 18:55:41.552643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.552691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.563706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcc52c0) with pdu=0x2000166f1868 00:34:55.149 [2024-11-17 18:55:41.564724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.564753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.578290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e84c0 00:34:55.149 [2024-11-17 18:55:41.579997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.580041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.590370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f4f40 00:34:55.149 [2024-11-17 18:55:41.592180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.592224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.599024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.149 [2024-11-17 18:55:41.600046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.600089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.611462] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f31b8 00:34:55.149 [2024-11-17 18:55:41.612672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.612733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.623516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f81e0 00:34:55.149 [2024-11-17 18:55:41.624208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.624237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.637153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eb760 00:34:55.149 [2024-11-17 18:55:41.638653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.638703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.648377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166df550 00:34:55.149 [2024-11-17 18:55:41.649689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.649742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:34:55.149 [2024-11-17 18:55:41.661909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e3060 00:34:55.149 [2024-11-17 18:55:41.663875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.663905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.670297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166de470 00:34:55.149 [2024-11-17 18:55:41.671324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.671367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.684723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f6458 00:34:55.149 [2024-11-17 18:55:41.686435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.686464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.693239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166fb048 00:34:55.149 [2024-11-17 18:55:41.694085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.694114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.707547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166ff3c8 00:34:55.149 [2024-11-17 18:55:41.708971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.709014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:55.149 [2024-11-17 18:55:41.718528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f7538 00:34:55.149 [2024-11-17 18:55:41.719741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.149 [2024-11-17 18:55:41.719770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:55.408 [2024-11-17 18:55:41.730531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166ebb98 00:34:55.408 [2024-11-17 18:55:41.731606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.408 [2024-11-17 18:55:41.731649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:55.408 [2024-11-17 18:55:41.742930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166ec408 00:34:55.408 [2024-11-17 18:55:41.743795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.408 [2024-11-17 18:55:41.743824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:55.408 [2024-11-17 18:55:41.754517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166fb8b8 00:34:55.408 [2024-11-17 18:55:41.755726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.408 [2024-11-17 18:55:41.755754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:55.408 [2024-11-17 18:55:41.766135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e95a0 00:34:55.408 [2024-11-17 18:55:41.767295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.408 [2024-11-17 18:55:41.767339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:55.408 [2024-11-17 18:55:41.778088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e6300 00:34:55.408 [2024-11-17 18:55:41.778851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.408 [2024-11-17 18:55:41.778880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:55.408 [2024-11-17 18:55:41.789532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f4b08 00:34:55.408 [2024-11-17 18:55:41.790686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.408 [2024-11-17 18:55:41.790715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:55.408 [2024-11-17 18:55:41.801263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e6b70 00:34:55.408 [2024-11-17 18:55:41.802236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.408 [2024-11-17 18:55:41.802279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:55.408 [2024-11-17 18:55:41.813432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e5a90 00:34:55.408 [2024-11-17 18:55:41.814372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.408 [2024-11-17 18:55:41.814414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:55.408 [2024-11-17 18:55:41.824549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e4140 00:34:55.408 [2024-11-17 18:55:41.825548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.408 [2024-11-17 18:55:41.825591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:55.408 [2024-11-17 18:55:41.838917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166feb58 00:34:55.408 [2024-11-17 18:55:41.840364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.408 
[2024-11-17 18:55:41.840415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:55.408 [2024-11-17 18:55:41.850217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e5658 00:34:55.408 [2024-11-17 18:55:41.851595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.408 [2024-11-17 18:55:41.851639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:55.408 [2024-11-17 18:55:41.862336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f6020 00:34:55.408 [2024-11-17 18:55:41.863700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.408 [2024-11-17 18:55:41.863743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:55.408 [2024-11-17 18:55:41.873109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166fe720 00:34:55.408 [2024-11-17 18:55:41.874190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.408 [2024-11-17 18:55:41.874219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:55.408 [2024-11-17 18:55:41.884724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eee38 00:34:55.409 [2024-11-17 18:55:41.885810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13999 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:55.409 [2024-11-17 18:55:41.885853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:55.409 [2024-11-17 18:55:41.896625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e12d8 00:34:55.409 [2024-11-17 18:55:41.897308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.409 [2024-11-17 18:55:41.897338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:55.409 [2024-11-17 18:55:41.910196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eaab8 00:34:55.409 [2024-11-17 18:55:41.911688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.409 [2024-11-17 18:55:41.911719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:55.409 [2024-11-17 18:55:41.921987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166ea248 00:34:55.409 [2024-11-17 18:55:41.923572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.409 [2024-11-17 18:55:41.923615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:55.409 [2024-11-17 18:55:41.934018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e6738 00:34:55.409 [2024-11-17 18:55:41.935593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:123 nsid:1 lba:9575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.409 [2024-11-17 18:55:41.935636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:55.409 [2024-11-17 18:55:41.944555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f1868 00:34:55.409 [2024-11-17 18:55:41.946135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.409 [2024-11-17 18:55:41.946164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:55.409 [2024-11-17 18:55:41.954644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eaab8 00:34:55.409 [2024-11-17 18:55:41.955476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.409 [2024-11-17 18:55:41.955519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:55.409 [2024-11-17 18:55:41.967035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f6cc8 00:34:55.409 [2024-11-17 18:55:41.967966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.409 [2024-11-17 18:55:41.968009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:55.409 [2024-11-17 18:55:41.979569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.409 [2024-11-17 18:55:41.980732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.409 [2024-11-17 18:55:41.980776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:55.667 [2024-11-17 18:55:41.991921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166f3a28 00:34:55.667 [2024-11-17 18:55:41.992986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.667 [2024-11-17 18:55:41.993013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:55.667 [2024-11-17 18:55:42.003450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166e1710 00:34:55.667 [2024-11-17 18:55:42.004509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.667 [2024-11-17 18:55:42.004550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:55.667 [2024-11-17 18:55:42.015290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166ebfd0 00:34:55.667 [2024-11-17 18:55:42.015955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.667 [2024-11-17 18:55:42.016000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:55.667 [2024-11-17 18:55:42.029413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166ff3c8 00:34:55.667 
[2024-11-17 18:55:42.031080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.667 [2024-11-17 18:55:42.031123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:55.667 [2024-11-17 18:55:42.040290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.667 [2024-11-17 18:55:42.040528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.667 [2024-11-17 18:55:42.040571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.667 [2024-11-17 18:55:42.054131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.667 [2024-11-17 18:55:42.054377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.667 [2024-11-17 18:55:42.054404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.667 [2024-11-17 18:55:42.068175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.667 [2024-11-17 18:55:42.068484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.667 [2024-11-17 18:55:42.068525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.667 [2024-11-17 18:55:42.081904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.667 [2024-11-17 18:55:42.082143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.667 [2024-11-17 18:55:42.082169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.668 [2024-11-17 18:55:42.095896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.668 [2024-11-17 18:55:42.096128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.668 [2024-11-17 18:55:42.096154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.668 [2024-11-17 18:55:42.109825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.668 [2024-11-17 18:55:42.110052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.668 [2024-11-17 18:55:42.110078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.668 [2024-11-17 18:55:42.123850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.668 [2024-11-17 18:55:42.124078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.668 [2024-11-17 18:55:42.124105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.668 21034.00 IOPS, 82.16 MiB/s 
[2024-11-17T17:55:42.244Z] [2024-11-17 18:55:42.137837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.668 [2024-11-17 18:55:42.138165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.668 [2024-11-17 18:55:42.138194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.668 [2024-11-17 18:55:42.151802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.668 [2024-11-17 18:55:42.152051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.668 [2024-11-17 18:55:42.152077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.668 [2024-11-17 18:55:42.165452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.668 [2024-11-17 18:55:42.165735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.668 [2024-11-17 18:55:42.165768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.668 [2024-11-17 18:55:42.179427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.668 [2024-11-17 18:55:42.179709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.668 [2024-11-17 18:55:42.179747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.668 [2024-11-17 18:55:42.193529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.668 [2024-11-17 18:55:42.193762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.668 [2024-11-17 18:55:42.193790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.668 [2024-11-17 18:55:42.207499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.668 [2024-11-17 18:55:42.207778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.668 [2024-11-17 18:55:42.207822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.668 [2024-11-17 18:55:42.221486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.668 [2024-11-17 18:55:42.221681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.668 [2024-11-17 18:55:42.221723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.668 [2024-11-17 18:55:42.235362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.668 [2024-11-17 18:55:42.235637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.668 [2024-11-17 18:55:42.235684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.249233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.249485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.249512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.263171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.263449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.263491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.277106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.277343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.277369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.291035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.291275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 
[2024-11-17 18:55:42.291302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.304867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.305096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.305123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.318991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.319234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.319259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.332893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.333122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.333148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.346857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.347084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24924 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.347110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.360936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.361184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.361210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.374808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.375020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.375060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.388866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.389112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.389139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.402858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.403111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:68 nsid:1 lba:12473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.403137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.416863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.417134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.417162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.430986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.431238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.431265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.445033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.445298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.445325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.926 [2024-11-17 18:55:42.459044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.926 [2024-11-17 18:55:42.459282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.926 [2024-11-17 18:55:42.459324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.927 [2024-11-17 18:55:42.473126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.927 [2024-11-17 18:55:42.473385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.927 [2024-11-17 18:55:42.473427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.927 [2024-11-17 18:55:42.487170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.927 [2024-11-17 18:55:42.487412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.927 [2024-11-17 18:55:42.487454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:55.927 [2024-11-17 18:55:42.500918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:55.927 [2024-11-17 18:55:42.501140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:55.927 [2024-11-17 18:55:42.501181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.184 [2024-11-17 18:55:42.514917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 
00:34:56.184 [2024-11-17 18:55:42.515197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.184 [2024-11-17 18:55:42.515240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.184 [2024-11-17 18:55:42.529002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.184 [2024-11-17 18:55:42.529282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.184 [2024-11-17 18:55:42.529328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.184 [2024-11-17 18:55:42.543108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.543339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.543365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.185 [2024-11-17 18:55:42.556895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.557129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.557155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.185 [2024-11-17 18:55:42.570763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.570973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.570999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.185 [2024-11-17 18:55:42.584563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.584761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.584803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.185 [2024-11-17 18:55:42.598502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.598760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.598787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.185 [2024-11-17 18:55:42.612535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.612766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.612792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.185 [2024-11-17 18:55:42.626598] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.626802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.626843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.185 [2024-11-17 18:55:42.640701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.640945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.640971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.185 [2024-11-17 18:55:42.654546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.654778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.654804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.185 [2024-11-17 18:55:42.668545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.668780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.668808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 
dnr:0 00:34:56.185 [2024-11-17 18:55:42.682425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.682731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.682760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.185 [2024-11-17 18:55:42.696549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.696778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.696805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.185 [2024-11-17 18:55:42.710437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.710721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.710763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.185 [2024-11-17 18:55:42.724365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.724641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.724692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.185 [2024-11-17 18:55:42.738231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.738569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.738598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.185 [2024-11-17 18:55:42.752370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.185 [2024-11-17 18:55:42.752651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.185 [2024-11-17 18:55:42.752701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.443 [2024-11-17 18:55:42.766163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.443 [2024-11-17 18:55:42.766447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.443 [2024-11-17 18:55:42.766491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.443 [2024-11-17 18:55:42.780123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.443 [2024-11-17 18:55:42.780408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.443 [2024-11-17 18:55:42.780437] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.443 [2024-11-17 18:55:42.794124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.443 [2024-11-17 18:55:42.794417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.443 [2024-11-17 18:55:42.794446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.443 [2024-11-17 18:55:42.807921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.443 [2024-11-17 18:55:42.808214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.443 [2024-11-17 18:55:42.808257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.443 [2024-11-17 18:55:42.821897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.443 [2024-11-17 18:55:42.822181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.443 [2024-11-17 18:55:42.822223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.443 [2024-11-17 18:55:42.835860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.443 [2024-11-17 18:55:42.836142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.443 [2024-11-17 18:55:42.836186] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.443 [2024-11-17 18:55:42.849911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.443 [2024-11-17 18:55:42.850178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.443 [2024-11-17 18:55:42.850204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.443 [2024-11-17 18:55:42.863887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.443 [2024-11-17 18:55:42.864244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.443 [2024-11-17 18:55:42.864287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.443 [2024-11-17 18:55:42.877950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.443 [2024-11-17 18:55:42.878221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.443 [2024-11-17 18:55:42.878248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.443 [2024-11-17 18:55:42.891910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.443 [2024-11-17 18:55:42.892195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.443 [2024-11-17 18:55:42.892227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.443 [2024-11-17 18:55:42.905895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.443 [2024-11-17 18:55:42.906186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.443 [2024-11-17 18:55:42.906213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.443 [2024-11-17 18:55:42.919970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.443 [2024-11-17 18:55:42.920247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.443 [2024-11-17 18:55:42.920275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.443 [2024-11-17 18:55:42.934090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.443 [2024-11-17 18:55:42.934380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.443 [2024-11-17 18:55:42.934422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.444 [2024-11-17 18:55:42.948191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.444 [2024-11-17 18:55:42.948457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:653 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.444 [2024-11-17 18:55:42.948484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.444 [2024-11-17 18:55:42.962183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.444 [2024-11-17 18:55:42.962475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.444 [2024-11-17 18:55:42.962501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.444 [2024-11-17 18:55:42.976188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.444 [2024-11-17 18:55:42.976470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.444 [2024-11-17 18:55:42.976496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.444 [2024-11-17 18:55:42.990192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.444 [2024-11-17 18:55:42.990477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.444 [2024-11-17 18:55:42.990505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.444 [2024-11-17 18:55:43.004279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.444 [2024-11-17 18:55:43.004562] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.444 [2024-11-17 18:55:43.004605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.444 [2024-11-17 18:55:43.018208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.444 [2024-11-17 18:55:43.018495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.444 [2024-11-17 18:55:43.018523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.702 [2024-11-17 18:55:43.032202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.702 [2024-11-17 18:55:43.032498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.702 [2024-11-17 18:55:43.032542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.702 [2024-11-17 18:55:43.046298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.702 [2024-11-17 18:55:43.046608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.702 [2024-11-17 18:55:43.046634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.702 [2024-11-17 18:55:43.060321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.702 [2024-11-17 18:55:43.060608] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.702 [2024-11-17 18:55:43.060651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.702 [2024-11-17 18:55:43.074421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.702 [2024-11-17 18:55:43.074736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.702 [2024-11-17 18:55:43.074764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.702 [2024-11-17 18:55:43.088540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.702 [2024-11-17 18:55:43.088812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.702 [2024-11-17 18:55:43.088840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.702 [2024-11-17 18:55:43.102429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.702 [2024-11-17 18:55:43.102711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.702 [2024-11-17 18:55:43.102753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.702 [2024-11-17 18:55:43.116520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 
00:34:56.702 [2024-11-17 18:55:43.116793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.702 [2024-11-17 18:55:43.116837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.702 [2024-11-17 18:55:43.130390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc52c0) with pdu=0x2000166eff18 00:34:56.702 [2024-11-17 18:55:43.130707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.702 [2024-11-17 18:55:43.130736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.702 19657.00 IOPS, 76.79 MiB/s 00:34:56.702 Latency(us) 00:34:56.702 [2024-11-17T17:55:43.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.702 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:56.702 nvme0n1 : 2.01 19656.01 76.78 0.00 0.00 6497.13 2657.85 16117.00 00:34:56.702 [2024-11-17T17:55:43.278Z] =================================================================================================================== 00:34:56.702 [2024-11-17T17:55:43.279Z] Total : 19656.01 76.78 0.00 0.00 6497.13 2657.85 16117.00 00:34:56.703 { 00:34:56.703 "results": [ 00:34:56.703 { 00:34:56.703 "job": "nvme0n1", 00:34:56.703 "core_mask": "0x2", 00:34:56.703 "workload": "randwrite", 00:34:56.703 "status": "finished", 00:34:56.703 "queue_depth": 128, 00:34:56.703 "io_size": 4096, 00:34:56.703 "runtime": 2.008241, 00:34:56.703 "iops": 19656.00742142004, 00:34:56.703 "mibps": 76.78127898992203, 00:34:56.703 "io_failed": 0, 00:34:56.703 "io_timeout": 0, 00:34:56.703 "avg_latency_us": 6497.126899318633, 00:34:56.703 "min_latency_us": 
2657.8488888888887, 00:34:56.703 "max_latency_us": 16117.001481481482 00:34:56.703 } 00:34:56.703 ], 00:34:56.703 "core_count": 1 00:34:56.703 } 00:34:56.703 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:56.703 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:56.703 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:56.703 | .driver_specific 00:34:56.703 | .nvme_error 00:34:56.703 | .status_code 00:34:56.703 | .command_transient_transport_error' 00:34:56.703 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:56.961 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 154 > 0 )) 00:34:56.961 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 886480 00:34:56.961 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 886480 ']' 00:34:56.961 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 886480 00:34:56.961 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:34:56.961 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:56.961 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 886480 00:34:56.961 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:56.961 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo 
']' 00:34:56.961 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 886480' 00:34:56.961 killing process with pid 886480 00:34:56.961 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 886480 00:34:56.961 Received shutdown signal, test time was about 2.000000 seconds 00:34:56.961 00:34:56.961 Latency(us) 00:34:56.961 [2024-11-17T17:55:43.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.961 [2024-11-17T17:55:43.537Z] =================================================================================================================== 00:34:56.961 [2024-11-17T17:55:43.537Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:56.961 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 886480 00:34:57.219 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:34:57.219 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:57.219 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:57.219 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:57.219 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:57.219 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=886949 00:34:57.219 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 886949 /var/tmp/bperf.sock 00:34:57.219 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:34:57.219 18:55:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 886949 ']' 00:34:57.219 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:57.219 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:57.219 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:57.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:57.219 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:57.219 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:57.219 [2024-11-17 18:55:43.686101] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:34:57.219 [2024-11-17 18:55:43.686198] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886949 ] 00:34:57.219 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:57.219 Zero copy mechanism will not be used. 
00:34:57.219 [2024-11-17 18:55:43.753804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.478 [2024-11-17 18:55:43.799996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:57.478 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:57.478 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:34:57.478 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:57.478 18:55:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:57.736 18:55:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:57.736 18:55:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.736 18:55:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:57.736 18:55:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.736 18:55:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:57.736 18:55:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:58.301 nvme0n1 00:34:58.302 18:55:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:34:58.302 18:55:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:58.302 18:55:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:58.302 18:55:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:58.302 18:55:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:58.302 18:55:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:58.302 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:58.302 Zero copy mechanism will not be used.
00:34:58.302 Running I/O for 2 seconds...
00:34:58.302 [2024-11-17 18:55:44.821636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.302 [2024-11-17 18:55:44.821796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.302 [2024-11-17 18:55:44.821831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.302 [2024-11-17 18:55:44.828443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.302 [2024-11-17 18:55:44.828573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.302 [2024-11-17 18:55:44.828604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.302 [2024-11-17 18:55:44.834797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.302 [2024-11-17 18:55:44.834951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.302 [2024-11-17 18:55:44.834981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.302 [2024-11-17 18:55:44.840866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.302 [2024-11-17 18:55:44.840953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.302 [2024-11-17 18:55:44.840981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.302 [2024-11-17 18:55:44.847084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.302 [2024-11-17 18:55:44.847182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.302 [2024-11-17 18:55:44.847211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.302 [2024-11-17 18:55:44.854098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.302 [2024-11-17 18:55:44.854201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.302 [2024-11-17 18:55:44.854231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.302 [2024-11-17 18:55:44.859807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.302 [2024-11-17 18:55:44.859889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.302 [2024-11-17 18:55:44.859916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.302 [2024-11-17 18:55:44.865031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.302 [2024-11-17 18:55:44.865145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.302 [2024-11-17 18:55:44.865181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.302 [2024-11-17 18:55:44.871009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.302 [2024-11-17 18:55:44.871081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.302 [2024-11-17 18:55:44.871109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.302 [2024-11-17 18:55:44.877387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.877458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.877487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.882641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.882733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.882766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.888001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.888088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.888115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.893093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.893170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.893197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.898199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.898297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.898324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.903788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.903901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.903930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.909119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.909224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.909252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.914239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.914390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.914420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.919504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.919648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.919684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.924920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.925040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.925069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.929981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.930075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.930103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.935135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.935233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.935262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.940571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.940703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.940732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.947614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.947753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.947782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.953416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.953517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.953545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.958574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.958690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.958719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.963521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.963607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.963634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.968569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.968666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.968701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.973749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.973896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.973924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.980318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.980438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.980466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.986233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.986323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.986350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.993493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.993712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.993741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:44.999757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:44.999836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:44.999863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:45.005466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:45.005581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:45.005609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:45.011177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:45.011309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:45.011343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.561 [2024-11-17 18:55:45.016568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.561 [2024-11-17 18:55:45.016725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.561 [2024-11-17 18:55:45.016754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.022337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.022444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.022472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.029311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.029433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.029462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.035228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.035336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.035365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.040923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.041025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.041053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.046384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.046526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.046555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.052586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.052682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.052709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.058922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.059010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.059037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.064750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.064837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.064864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.070129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.070202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.070229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.075885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.075960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.075987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.080764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.080841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.080869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.085791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.085865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.085893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.090854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.090967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.090994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.096581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.096775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.096804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.102976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.103155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.103198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.109414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.109541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.109570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.115808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.115993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.116022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.122175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.122324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.122353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.128622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.128733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.128763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.562 [2024-11-17 18:55:45.134922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.562 [2024-11-17 18:55:45.135094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.562 [2024-11-17 18:55:45.135123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.822 [2024-11-17 18:55:45.141518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.822 [2024-11-17 18:55:45.141633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.822 [2024-11-17 18:55:45.141662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.822 [2024-11-17 18:55:45.147978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.822 [2024-11-17 18:55:45.148110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.822 [2024-11-17 18:55:45.148138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.822 [2024-11-17 18:55:45.154506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.822 [2024-11-17 18:55:45.154708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.822 [2024-11-17 18:55:45.154737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.822 [2024-11-17 18:55:45.161011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.822 [2024-11-17 18:55:45.161133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.822 [2024-11-17 18:55:45.161161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.822 [2024-11-17 18:55:45.167573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.822 [2024-11-17 18:55:45.167734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.822 [2024-11-17 18:55:45.167768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.822 [2024-11-17 18:55:45.174160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.822 [2024-11-17 18:55:45.174260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.822 [2024-11-17 18:55:45.174289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.822 [2024-11-17 18:55:45.180641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.822 [2024-11-17 18:55:45.180779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.822 [2024-11-17 18:55:45.180808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.822 [2024-11-17 18:55:45.187026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.822 [2024-11-17 18:55:45.187161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.822 [2024-11-17 18:55:45.187190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.822 [2024-11-17 18:55:45.193292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.822 [2024-11-17 18:55:45.193397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.822 [2024-11-17 18:55:45.193425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.822 [2024-11-17 18:55:45.198466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.822 [2024-11-17 18:55:45.198556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.822 [2024-11-17 18:55:45.198583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.822 [2024-11-17 18:55:45.203910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.822 [2024-11-17 18:55:45.204012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.822 [2024-11-17 18:55:45.204040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.822 [2024-11-17 18:55:45.208841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.822 [2024-11-17 18:55:45.208945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.822 [2024-11-17 18:55:45.208974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.822 [2024-11-17 18:55:45.213801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.823 [2024-11-17 18:55:45.213903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.823 [2024-11-17 18:55:45.213931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.823 [2024-11-17 18:55:45.219276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.823 [2024-11-17 18:55:45.219457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.823 [2024-11-17 18:55:45.219490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.823 [2024-11-17 18:55:45.225664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.823 [2024-11-17 18:55:45.225866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.823 [2024-11-17 18:55:45.225895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.823 [2024-11-17 18:55:45.231504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.823 [2024-11-17 18:55:45.231628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.823 [2024-11-17 18:55:45.231656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.823 [2024-11-17 18:55:45.238744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.823 [2024-11-17 18:55:45.238944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.823 [2024-11-17 18:55:45.238973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.823 [2024-11-17 18:55:45.244596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.823 [2024-11-17 18:55:45.244715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.823 [2024-11-17 18:55:45.244744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.823 [2024-11-17 18:55:45.249906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.823 [2024-11-17 18:55:45.250022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.823 [2024-11-17 18:55:45.250051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.823 [2024-11-17 18:55:45.255614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.823 [2024-11-17 18:55:45.255732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.823 [2024-11-17 18:55:45.255761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.823 [2024-11-17 18:55:45.261033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.823 [2024-11-17 18:55:45.261148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.823 [2024-11-17 18:55:45.261179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.823 [2024-11-17 18:55:45.265949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.823 [2024-11-17 18:55:45.266027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.823 [2024-11-17 18:55:45.266056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:58.823 [2024-11-17 18:55:45.270854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.823 [2024-11-17 18:55:45.270939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.823 [2024-11-17 18:55:45.270966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:58.823 [2024-11-17 18:55:45.276002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.823 [2024-11-17 18:55:45.276094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.823 [2024-11-17 18:55:45.276125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:58.823 [2024-11-17 18:55:45.281347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:58.823 [2024-11-17 18:55:45.281484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:58.823 [2024-11-17 18:55:45.281512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:58.823 [2024-11-17 18:55:45.287587]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.823 [2024-11-17 18:55:45.287766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.823 [2024-11-17 18:55:45.287795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.823 [2024-11-17 18:55:45.294416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.823 [2024-11-17 18:55:45.294616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.823 [2024-11-17 18:55:45.294645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.823 [2024-11-17 18:55:45.301575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.823 [2024-11-17 18:55:45.301686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.823 [2024-11-17 18:55:45.301716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.823 [2024-11-17 18:55:45.307649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.823 [2024-11-17 18:55:45.307748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.823 [2024-11-17 18:55:45.307777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:34:58.823 [2024-11-17 18:55:45.313671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.823 [2024-11-17 18:55:45.313767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.823 [2024-11-17 18:55:45.313793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.823 [2024-11-17 18:55:45.319338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.823 [2024-11-17 18:55:45.319409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.823 [2024-11-17 18:55:45.319436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.823 [2024-11-17 18:55:45.325163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.823 [2024-11-17 18:55:45.325244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.823 [2024-11-17 18:55:45.325271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.823 [2024-11-17 18:55:45.330742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.823 [2024-11-17 18:55:45.330816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.823 [2024-11-17 18:55:45.330843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.823 [2024-11-17 18:55:45.336237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.823 [2024-11-17 18:55:45.336336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.823 [2024-11-17 18:55:45.336364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.823 [2024-11-17 18:55:45.341647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.823 [2024-11-17 18:55:45.341765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.823 [2024-11-17 18:55:45.341793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.823 [2024-11-17 18:55:45.347315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.823 [2024-11-17 18:55:45.347393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.823 [2024-11-17 18:55:45.347421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.823 [2024-11-17 18:55:45.353209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.823 [2024-11-17 18:55:45.353302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.823 [2024-11-17 18:55:45.353333] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.823 [2024-11-17 18:55:45.359214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.823 [2024-11-17 18:55:45.359294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.823 [2024-11-17 18:55:45.359321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.823 [2024-11-17 18:55:45.365327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.823 [2024-11-17 18:55:45.365413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.823 [2024-11-17 18:55:45.365440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.823 [2024-11-17 18:55:45.370606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.824 [2024-11-17 18:55:45.370697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.824 [2024-11-17 18:55:45.370737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:58.824 [2024-11-17 18:55:45.375811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.824 [2024-11-17 18:55:45.375892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.824 [2024-11-17 18:55:45.375919] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:58.824 [2024-11-17 18:55:45.381014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.824 [2024-11-17 18:55:45.381098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.824 [2024-11-17 18:55:45.381139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:58.824 [2024-11-17 18:55:45.386041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.824 [2024-11-17 18:55:45.386133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.824 [2024-11-17 18:55:45.386160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:58.824 [2024-11-17 18:55:45.391581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:58.824 [2024-11-17 18:55:45.391683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.824 [2024-11-17 18:55:45.391715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.102 [2024-11-17 18:55:45.397900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.102 [2024-11-17 18:55:45.398092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:59.102 [2024-11-17 18:55:45.398122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.102 [2024-11-17 18:55:45.404470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.102 [2024-11-17 18:55:45.404585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.102 [2024-11-17 18:55:45.404613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.102 [2024-11-17 18:55:45.410816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.102 [2024-11-17 18:55:45.410959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.102 [2024-11-17 18:55:45.410988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.102 [2024-11-17 18:55:45.417314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.102 [2024-11-17 18:55:45.417477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.102 [2024-11-17 18:55:45.417506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.102 [2024-11-17 18:55:45.422592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.102 [2024-11-17 18:55:45.422733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.102 [2024-11-17 18:55:45.422762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.102 [2024-11-17 18:55:45.427768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.102 [2024-11-17 18:55:45.427881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.102 [2024-11-17 18:55:45.427910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.102 [2024-11-17 18:55:45.433069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.102 [2024-11-17 18:55:45.433175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.433202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.438130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.438203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.438230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.443495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.443662] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.443701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.450606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.450705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.450732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.456524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.456631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.456660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.462472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.462566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.462598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.468058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.468158] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.468188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.473600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.473685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.473713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.479288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.479417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.479446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.485097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.485222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.485250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.490386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 
00:34:59.103 [2024-11-17 18:55:45.490485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.490514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.495406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.495514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.495542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.500469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.500577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.500606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.506780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.506917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.506946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.513115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.513247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.513275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.519314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.519425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.519459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.524393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.524489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.524518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.529393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.529500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.529529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.534358] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.534462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.534490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.103 [2024-11-17 18:55:45.539549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.103 [2024-11-17 18:55:45.539645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.103 [2024-11-17 18:55:45.539680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.544737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.544869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.544898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.550401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.550515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.550543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:34:59.104 [2024-11-17 18:55:45.557177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.557309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.557338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.563847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.564041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.564070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.571282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.571477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.571507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.577937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.578026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.578053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.583721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.583807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.583835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.588508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.588600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.588632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.593605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.593716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.593745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.598851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.598938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.598965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.603773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.603908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.603936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.609789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.609962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.609991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.616109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.616202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.616229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.623427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.623540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.623568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.629559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.629669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.629704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.634808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.634897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.634924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.639711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.639795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.639822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.644617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.644711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 
[2024-11-17 18:55:45.644739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.650382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.104 [2024-11-17 18:55:45.650472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.104 [2024-11-17 18:55:45.650503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.104 [2024-11-17 18:55:45.655622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.105 [2024-11-17 18:55:45.655708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.105 [2024-11-17 18:55:45.655736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.411 [2024-11-17 18:55:45.660500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.411 [2024-11-17 18:55:45.660591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.411 [2024-11-17 18:55:45.660623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.411 [2024-11-17 18:55:45.665547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.411 [2024-11-17 18:55:45.665630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.411 [2024-11-17 18:55:45.665669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.422 [2024-11-17 18:55:45.670579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.422 [2024-11-17 18:55:45.670682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.422 [2024-11-17 18:55:45.670715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.422 [2024-11-17 18:55:45.675905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.422 [2024-11-17 18:55:45.676043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.422 [2024-11-17 18:55:45.676073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.422 [2024-11-17 18:55:45.682123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.422 [2024-11-17 18:55:45.682282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.422 [2024-11-17 18:55:45.682312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.422 [2024-11-17 18:55:45.688743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.422 [2024-11-17 18:55:45.688884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.422 [2024-11-17 18:55:45.688913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.422 [2024-11-17 18:55:45.696112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.422 [2024-11-17 18:55:45.696266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.422 [2024-11-17 18:55:45.696296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.422 [2024-11-17 18:55:45.702656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.422 [2024-11-17 18:55:45.702829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.422 [2024-11-17 18:55:45.702858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.422 [2024-11-17 18:55:45.708790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.422 [2024-11-17 18:55:45.708994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.422 [2024-11-17 18:55:45.709023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.422 [2024-11-17 18:55:45.715389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.422 [2024-11-17 18:55:45.715515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.422 [2024-11-17 18:55:45.715544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.422 [2024-11-17 18:55:45.721855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.422 [2024-11-17 18:55:45.722057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.422 [2024-11-17 18:55:45.722086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.422 [2024-11-17 18:55:45.728828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.422 [2024-11-17 18:55:45.728906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.422 [2024-11-17 18:55:45.728934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.422 [2024-11-17 18:55:45.736341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.422 [2024-11-17 18:55:45.736548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.422 [2024-11-17 18:55:45.736577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.422 [2024-11-17 18:55:45.743312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.422 
[2024-11-17 18:55:45.743444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.422 [2024-11-17 18:55:45.743473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.422 [2024-11-17 18:55:45.748439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.422 [2024-11-17 18:55:45.748589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.422 [2024-11-17 18:55:45.748618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.422 [2024-11-17 18:55:45.754004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.754129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.754157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.759634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.759759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.759789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.765072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.765173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.765204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.770180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.770286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.770314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.775281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.775450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.775478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.781554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.781780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.781809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.787839] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.787990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.788019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.794448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.794568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.794597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.799488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.799583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.799610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.804547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.804664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.804700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:34:59.423 [2024-11-17 18:55:45.809689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.809773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.809800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.423 5300.00 IOPS, 662.50 MiB/s [2024-11-17T17:55:45.999Z] [2024-11-17 18:55:45.816308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.816442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.816471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.821359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.821493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.821527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.826599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.826846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.826874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.832813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.832994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.833023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.838151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.838263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.838290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.843099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.843236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.843266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.848270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.848402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 
[2024-11-17 18:55:45.848432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.854089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.854180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.854208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.860147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.860291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.860320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.866038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.866267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.866296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.873096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.873302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.873331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.878640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.878779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.878808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.884394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.884516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.884545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.890059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.890178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.890207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.895825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.895921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.895950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.901599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.423 [2024-11-17 18:55:45.901734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.423 [2024-11-17 18:55:45.901762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.423 [2024-11-17 18:55:45.906915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.424 [2024-11-17 18:55:45.907001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.424 [2024-11-17 18:55:45.907028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.424 [2024-11-17 18:55:45.912534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.424 [2024-11-17 18:55:45.912697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.424 [2024-11-17 18:55:45.912726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.424 [2024-11-17 18:55:45.918427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.424 [2024-11-17 18:55:45.918801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.424 [2024-11-17 18:55:45.918830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.424 [2024-11-17 18:55:45.924575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.424 [2024-11-17 18:55:45.924914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.424 [2024-11-17 18:55:45.924943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.424 [2024-11-17 18:55:45.929919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.424 [2024-11-17 18:55:45.930222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.424 [2024-11-17 18:55:45.930266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.424 [2024-11-17 18:55:45.934576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.424 [2024-11-17 18:55:45.934868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.424 [2024-11-17 18:55:45.934897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.424 [2024-11-17 18:55:45.939243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.424 
[2024-11-17 18:55:45.939528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.424 [2024-11-17 18:55:45.939557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.424 [2024-11-17 18:55:45.944409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.424 [2024-11-17 18:55:45.944721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.424 [2024-11-17 18:55:45.944749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.424 [2024-11-17 18:55:45.950400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.424 [2024-11-17 18:55:45.950723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.424 [2024-11-17 18:55:45.950753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.424 [2024-11-17 18:55:45.955263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.424 [2024-11-17 18:55:45.955563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.424 [2024-11-17 18:55:45.955592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.424 [2024-11-17 18:55:45.959940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.424 [2024-11-17 18:55:45.960235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.424 [2024-11-17 18:55:45.960264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.424 [2024-11-17 18:55:45.964638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.424 [2024-11-17 18:55:45.964936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.424 [2024-11-17 18:55:45.964970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.424 [2024-11-17 18:55:45.969238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.424 [2024-11-17 18:55:45.969496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.424 [2024-11-17 18:55:45.969525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.424 [2024-11-17 18:55:45.974584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.424 [2024-11-17 18:55:45.974920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.424 [2024-11-17 18:55:45.974958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.685 [2024-11-17 18:55:45.980573] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.685 [2024-11-17 18:55:45.980899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.685 [2024-11-17 18:55:45.980929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.685 [2024-11-17 18:55:45.987053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.685 [2024-11-17 18:55:45.987368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.685 [2024-11-17 18:55:45.987412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.685 [2024-11-17 18:55:45.993458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.685 [2024-11-17 18:55:45.993820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.685 [2024-11-17 18:55:45.993866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.685 [2024-11-17 18:55:46.000153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.685 [2024-11-17 18:55:46.000460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.685 [2024-11-17 18:55:46.000489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.685 [2024-11-17 18:55:46.005895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.685 [2024-11-17 18:55:46.006183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.685 [2024-11-17 18:55:46.006213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.010587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.010810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.010840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.014899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.015120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.015149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.019238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.019451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.019480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.023449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.023685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.023714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.027804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.028005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.028033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.032093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.032355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.032383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.036359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.036596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.036624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.041047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.041273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.041302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.045598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.045807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.045835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.050103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.050282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.050311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.054644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.054862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.054891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.060112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.060341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.060369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.064485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.064693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.064722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.069216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.069428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.069457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.074425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.074694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.074724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.080195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.080508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.080536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.085296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.085539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.085567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.089706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.089892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.089921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.094271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.094460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.094495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.098639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.098839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.098868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.103355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.103582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.103611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.107923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.108098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.108126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.112436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.112637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.112665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.117010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.117203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.117232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.121432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.121611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.121639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.126065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.126296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.126324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.686 [2024-11-17 18:55:46.130554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.686 [2024-11-17 18:55:46.130799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.686 [2024-11-17 18:55:46.130827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.134990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.135174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.135208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.139516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.139660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.139698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.144077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.144269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.144297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.148659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.148826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.148855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.153180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.153335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.153364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.157769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.157932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.157960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.162219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.162389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.162417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.166724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.166909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.166937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.171360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.171539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.171568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.175954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.176149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.176178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.180526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.180731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.180759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.185127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.185340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.185368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.189565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.189759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.189788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.194112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.194335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.194364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.198620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.198810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.198838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.203200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.203387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.203415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.207722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.207879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.207906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.212263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.212455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.212484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.216833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.216972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.217015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.221257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.221413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.221442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.225667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.225824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.225851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.230235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.230380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.230408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.234938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.235097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.235126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.239345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.239513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.239541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.244043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.244164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.244192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.248501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.248659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.248695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.253106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.687 [2024-11-17 18:55:46.253248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.687 [2024-11-17 18:55:46.253282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.687 [2024-11-17 18:55:46.257614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.688 [2024-11-17 18:55:46.257769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.688 [2024-11-17 18:55:46.257798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.262119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.262313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.262343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.266689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.266830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.266859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.271109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.271287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.271316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.275652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.275871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.275899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.280192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.280343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.280371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.284542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.284694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.284722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.289802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.289956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.289988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.295011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.295230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.295259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.300939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.301202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.301231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.306527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.306794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.306823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.312579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.312755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.312784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.318842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.319043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.319086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.324633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.324892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.324921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.329846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.330111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.330139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.334513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.334646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.334681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.339666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.339878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.339907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.344842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.345025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.948 [2024-11-17 18:55:46.345054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:59.948 [2024-11-17 18:55:46.350034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8
00:34:59.948 [2024-11-17 18:55:46.350267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:59.949 [2024-11-17 18:55:46.350296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:59.949 [2024-11-17 18:55:46.355187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with 
pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.355454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.355483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.360508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.360726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.360755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.365708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.365926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.365955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.370852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.371070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.371099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.376087] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.376276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.376304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.381330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.381539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.381567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.386656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.386885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.386919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.391827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.392014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.392042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.397474] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.397739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.397767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.403110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.403340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.403368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.409052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.409326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.409355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.415002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.415217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.415245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:34:59.949 [2024-11-17 18:55:46.420537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.420733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.420762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.426467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.426752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.426781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.432501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.432748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.432777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.438451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.438738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.438766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.444438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.444711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.444739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.450473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.450769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.450798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.456716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.457012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.457045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.462838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.463059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.463087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.468863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.469158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.469187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.474901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.475171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.475215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.481014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.481280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.481308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.487037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.487292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.487320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.493227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.493536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.493579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.499123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.499401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.499430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:59.949 [2024-11-17 18:55:46.504968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.949 [2024-11-17 18:55:46.505276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.949 [2024-11-17 18:55:46.505304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:59.950 [2024-11-17 18:55:46.510923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.950 [2024-11-17 18:55:46.511223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.950 
[2024-11-17 18:55:46.511251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:59.950 [2024-11-17 18:55:46.516786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:34:59.950 [2024-11-17 18:55:46.516968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.950 [2024-11-17 18:55:46.516996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.522369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.522447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.522476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.527009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.527077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.527105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.531171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.531246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.531275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.535348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.535423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.535456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.539467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.539544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.539572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.543974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.544089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.544118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.548969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.549170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.549199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.554150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.554263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.554289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.560081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.560223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.560252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.565184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.565323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.565352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.570307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.570507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.570535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.575451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.575601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.575629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.580532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.580688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.580716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.585736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.585892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.585920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.590754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 
[2024-11-17 18:55:46.590955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.590984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.595784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.595920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.595948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.600847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.600999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.601028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.210 [2024-11-17 18:55:46.605866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.210 [2024-11-17 18:55:46.605954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.210 [2024-11-17 18:55:46.605981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.610904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.611051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.611079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.616075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.616243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.616271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.621131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.621312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.621341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.626302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.626520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.626549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.631452] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.631629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.631678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.636270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.636365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.636392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.640508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.640612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.640640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.645178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.645289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.645317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:35:00.211 [2024-11-17 18:55:46.650442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.650512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.650539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.654701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.654783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.654810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.658933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.659053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.659079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.663251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.663329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.663360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.667548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.667639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.667667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.671878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.671963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.671990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.676162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.676240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.676267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.680422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.680490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.680516] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.684626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.684737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.684765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.688788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.688878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.688906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.692934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.693023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.693050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.697160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.697245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.697271] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.701386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.701479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.701506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.705573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.705641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.705667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.709842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.709924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.709950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.714007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.714091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:00.211 [2024-11-17 18:55:46.714117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.718253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.718320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.718346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.211 [2024-11-17 18:55:46.722571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.211 [2024-11-17 18:55:46.722640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.211 [2024-11-17 18:55:46.722666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.212 [2024-11-17 18:55:46.726990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.212 [2024-11-17 18:55:46.727078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.212 [2024-11-17 18:55:46.727104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.212 [2024-11-17 18:55:46.731704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.212 [2024-11-17 18:55:46.731853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.212 [2024-11-17 18:55:46.731881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.212 [2024-11-17 18:55:46.736782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.212 [2024-11-17 18:55:46.736931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.212 [2024-11-17 18:55:46.736959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.212 [2024-11-17 18:55:46.742252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.212 [2024-11-17 18:55:46.742373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.212 [2024-11-17 18:55:46.742402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.212 [2024-11-17 18:55:46.747934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.212 [2024-11-17 18:55:46.748027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.212 [2024-11-17 18:55:46.748060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.212 [2024-11-17 18:55:46.752256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.212 [2024-11-17 18:55:46.752337] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.212 [2024-11-17 18:55:46.752364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.212 [2024-11-17 18:55:46.756710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.212 [2024-11-17 18:55:46.756810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.212 [2024-11-17 18:55:46.756838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.212 [2024-11-17 18:55:46.761060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.212 [2024-11-17 18:55:46.761151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.212 [2024-11-17 18:55:46.761181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.212 [2024-11-17 18:55:46.765615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.212 [2024-11-17 18:55:46.765696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.212 [2024-11-17 18:55:46.765723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.212 [2024-11-17 18:55:46.770044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.212 [2024-11-17 18:55:46.770180] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.212 [2024-11-17 18:55:46.770209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.212 [2024-11-17 18:55:46.774858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.212 [2024-11-17 18:55:46.775004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.212 [2024-11-17 18:55:46.775033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.212 [2024-11-17 18:55:46.779914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.212 [2024-11-17 18:55:46.780047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.212 [2024-11-17 18:55:46.780087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.470 [2024-11-17 18:55:46.784121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.470 [2024-11-17 18:55:46.784233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.470 [2024-11-17 18:55:46.784264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.470 [2024-11-17 18:55:46.788431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with 
pdu=0x2000166ff3c8 00:35:00.470 [2024-11-17 18:55:46.788535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.470 [2024-11-17 18:55:46.788563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.470 [2024-11-17 18:55:46.792777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.470 [2024-11-17 18:55:46.792875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.470 [2024-11-17 18:55:46.792903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.470 [2024-11-17 18:55:46.797082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.470 [2024-11-17 18:55:46.797181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.470 [2024-11-17 18:55:46.797210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.470 [2024-11-17 18:55:46.801910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.470 [2024-11-17 18:55:46.802015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.470 [2024-11-17 18:55:46.802043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:00.470 [2024-11-17 18:55:46.807095] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.470 [2024-11-17 18:55:46.807202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.470 [2024-11-17 18:55:46.807230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:00.470 [2024-11-17 18:55:46.813082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.470 [2024-11-17 18:55:46.813208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.470 [2024-11-17 18:55:46.813237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:00.470 5746.00 IOPS, 718.25 MiB/s [2024-11-17T17:55:47.046Z] [2024-11-17 18:55:46.818957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcc5600) with pdu=0x2000166ff3c8 00:35:00.470 [2024-11-17 18:55:46.819057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.470 [2024-11-17 18:55:46.819086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:00.470 00:35:00.470 Latency(us) 00:35:00.470 [2024-11-17T17:55:47.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.470 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:00.470 nvme0n1 : 2.00 5744.25 718.03 0.00 0.00 2778.39 1808.31 13592.65 00:35:00.470 [2024-11-17T17:55:47.046Z] =================================================================================================================== 00:35:00.470 
[2024-11-17T17:55:47.046Z] Total : 5744.25 718.03 0.00 0.00 2778.39 1808.31 13592.65 00:35:00.470 { 00:35:00.470 "results": [ 00:35:00.470 { 00:35:00.470 "job": "nvme0n1", 00:35:00.470 "core_mask": "0x2", 00:35:00.470 "workload": "randwrite", 00:35:00.470 "status": "finished", 00:35:00.470 "queue_depth": 16, 00:35:00.470 "io_size": 131072, 00:35:00.470 "runtime": 2.004091, 00:35:00.470 "iops": 5744.250136346104, 00:35:00.470 "mibps": 718.031267043263, 00:35:00.470 "io_failed": 0, 00:35:00.470 "io_timeout": 0, 00:35:00.470 "avg_latency_us": 2778.3906066455615, 00:35:00.470 "min_latency_us": 1808.3081481481481, 00:35:00.470 "max_latency_us": 13592.651851851851 00:35:00.470 } 00:35:00.470 ], 00:35:00.470 "core_count": 1 00:35:00.470 } 00:35:00.470 18:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:00.470 18:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:00.471 18:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:00.471 | .driver_specific 00:35:00.471 | .nvme_error 00:35:00.471 | .status_code 00:35:00.471 | .command_transient_transport_error' 00:35:00.471 18:55:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:00.729 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 372 > 0 )) 00:35:00.729 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 886949 00:35:00.729 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 886949 ']' 00:35:00.729 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 886949 00:35:00.729 18:55:47 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:00.729 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:00.729 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 886949 00:35:00.729 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:00.729 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:00.729 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 886949' 00:35:00.729 killing process with pid 886949 00:35:00.729 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 886949 00:35:00.729 Received shutdown signal, test time was about 2.000000 seconds 00:35:00.729 00:35:00.729 Latency(us) 00:35:00.729 [2024-11-17T17:55:47.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.729 [2024-11-17T17:55:47.305Z] =================================================================================================================== 00:35:00.729 [2024-11-17T17:55:47.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:00.729 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 886949 00:35:00.988 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 885637 00:35:00.988 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 885637 ']' 00:35:00.988 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 885637 00:35:00.988 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # uname 00:35:00.988 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:00.988 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 885637 00:35:00.988 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:00.988 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:00.988 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 885637' 00:35:00.988 killing process with pid 885637 00:35:00.988 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 885637 00:35:00.988 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 885637 00:35:01.246 00:35:01.246 real 0m15.368s 00:35:01.246 user 0m30.900s 00:35:01.246 sys 0m4.197s 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:01.246 ************************************ 00:35:01.246 END TEST nvmf_digest_error 00:35:01.246 ************************************ 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:01.246 18:55:47 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:01.246 rmmod nvme_tcp 00:35:01.246 rmmod nvme_fabrics 00:35:01.246 rmmod nvme_keyring 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 885637 ']' 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 885637 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 885637 ']' 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 885637 00:35:01.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (885637) - No such process 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 885637 is not found' 00:35:01.246 Process with pid 885637 is not found 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@791 -- # iptables-restore 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:01.246 18:55:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:03.783 00:35:03.783 real 0m35.532s 00:35:03.783 user 1m2.917s 00:35:03.783 sys 0m10.152s 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:03.783 ************************************ 00:35:03.783 END TEST nvmf_digest 00:35:03.783 ************************************ 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.783 ************************************ 00:35:03.783 START TEST nvmf_bdevperf 00:35:03.783 
************************************ 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:03.783 * Looking for test storage... 00:35:03.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 
00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:03.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.783 --rc genhtml_branch_coverage=1 00:35:03.783 
--rc genhtml_function_coverage=1 00:35:03.783 --rc genhtml_legend=1 00:35:03.783 --rc geninfo_all_blocks=1 00:35:03.783 --rc geninfo_unexecuted_blocks=1 00:35:03.783 00:35:03.783 ' 00:35:03.783 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:03.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.783 --rc genhtml_branch_coverage=1 00:35:03.783 --rc genhtml_function_coverage=1 00:35:03.783 --rc genhtml_legend=1 00:35:03.784 --rc geninfo_all_blocks=1 00:35:03.784 --rc geninfo_unexecuted_blocks=1 00:35:03.784 00:35:03.784 ' 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:03.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.784 --rc genhtml_branch_coverage=1 00:35:03.784 --rc genhtml_function_coverage=1 00:35:03.784 --rc genhtml_legend=1 00:35:03.784 --rc geninfo_all_blocks=1 00:35:03.784 --rc geninfo_unexecuted_blocks=1 00:35:03.784 00:35:03.784 ' 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:03.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.784 --rc genhtml_branch_coverage=1 00:35:03.784 --rc genhtml_function_coverage=1 00:35:03.784 --rc genhtml_legend=1 00:35:03.784 --rc geninfo_all_blocks=1 00:35:03.784 --rc geninfo_unexecuted_blocks=1 00:35:03.784 00:35:03.784 ' 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:03.784 
18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.784 
18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:03.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:35:03.784 18:55:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:05.691 Found 
0000:0a:00.0 (0x8086 - 0x159b) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:05.691 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:05.691 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:05.691 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:35:05.691 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:05.692 18:55:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:05.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:05.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:35:05.692 00:35:05.692 --- 10.0.0.2 ping statistics --- 00:35:05.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.692 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:05.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:05.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:35:05.692 00:35:05.692 --- 10.0.0.1 ping statistics --- 00:35:05.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.692 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=889373 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 889373 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 889373 ']' 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.692 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.692 [2024-11-17 18:55:52.116275] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:05.692 [2024-11-17 18:55:52.116360] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:05.692 [2024-11-17 18:55:52.193160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:05.692 [2024-11-17 18:55:52.239668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:05.692 [2024-11-17 18:55:52.239724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:05.692 [2024-11-17 18:55:52.239755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:05.692 [2024-11-17 18:55:52.239767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:05.692 [2024-11-17 18:55:52.239777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:05.692 [2024-11-17 18:55:52.241271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:05.692 [2024-11-17 18:55:52.241326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:05.692 [2024-11-17 18:55:52.241330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.951 [2024-11-17 18:55:52.374140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.951 18:55:52 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.951 Malloc0 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:05.951 [2024-11-17 18:55:52.430793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:05.951 { 00:35:05.951 "params": { 00:35:05.951 "name": "Nvme$subsystem", 00:35:05.951 "trtype": "$TEST_TRANSPORT", 00:35:05.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.951 "adrfam": "ipv4", 00:35:05.951 "trsvcid": "$NVMF_PORT", 00:35:05.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.951 "hdgst": ${hdgst:-false}, 00:35:05.951 "ddgst": ${ddgst:-false} 00:35:05.951 }, 00:35:05.951 "method": "bdev_nvme_attach_controller" 00:35:05.951 } 00:35:05.951 EOF 00:35:05.951 )") 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:05.951 18:55:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:05.951 "params": { 00:35:05.951 "name": "Nvme1", 00:35:05.951 "trtype": "tcp", 00:35:05.951 "traddr": "10.0.0.2", 00:35:05.951 "adrfam": "ipv4", 00:35:05.951 "trsvcid": "4420", 00:35:05.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:05.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:05.951 "hdgst": false, 00:35:05.951 "ddgst": false 00:35:05.951 }, 00:35:05.951 "method": "bdev_nvme_attach_controller" 00:35:05.951 }' 00:35:05.951 [2024-11-17 18:55:52.478850] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:05.951 [2024-11-17 18:55:52.478926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889397 ] 00:35:06.209 [2024-11-17 18:55:52.548660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.209 [2024-11-17 18:55:52.595269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.209 Running I/O for 1 seconds... 
00:35:07.584 8533.00 IOPS, 33.33 MiB/s 00:35:07.584 Latency(us) 00:35:07.584 [2024-11-17T17:55:54.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.584 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:07.584 Verification LBA range: start 0x0 length 0x4000 00:35:07.584 Nvme1n1 : 1.01 8599.26 33.59 0.00 0.00 14810.17 1808.31 14951.92 00:35:07.584 [2024-11-17T17:55:54.160Z] =================================================================================================================== 00:35:07.584 [2024-11-17T17:55:54.160Z] Total : 8599.26 33.59 0.00 0.00 14810.17 1808.31 14951.92 00:35:07.584 18:55:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=889650 00:35:07.584 18:55:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:07.584 18:55:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:07.584 18:55:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:07.584 18:55:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:07.584 18:55:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:07.584 18:55:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:07.584 18:55:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:07.584 { 00:35:07.584 "params": { 00:35:07.584 "name": "Nvme$subsystem", 00:35:07.584 "trtype": "$TEST_TRANSPORT", 00:35:07.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:07.584 "adrfam": "ipv4", 00:35:07.584 "trsvcid": "$NVMF_PORT", 00:35:07.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:07.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:07.584 "hdgst": ${hdgst:-false}, 00:35:07.584 "ddgst": 
${ddgst:-false} 00:35:07.584 }, 00:35:07.584 "method": "bdev_nvme_attach_controller" 00:35:07.584 } 00:35:07.584 EOF 00:35:07.584 )") 00:35:07.584 18:55:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:07.584 18:55:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:07.584 18:55:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:07.584 18:55:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:07.584 "params": { 00:35:07.584 "name": "Nvme1", 00:35:07.584 "trtype": "tcp", 00:35:07.584 "traddr": "10.0.0.2", 00:35:07.584 "adrfam": "ipv4", 00:35:07.584 "trsvcid": "4420", 00:35:07.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:07.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:07.584 "hdgst": false, 00:35:07.584 "ddgst": false 00:35:07.584 }, 00:35:07.584 "method": "bdev_nvme_attach_controller" 00:35:07.584 }' 00:35:07.584 [2024-11-17 18:55:54.020767] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:07.584 [2024-11-17 18:55:54.020853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889650 ] 00:35:07.584 [2024-11-17 18:55:54.092091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:07.584 [2024-11-17 18:55:54.136995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:07.841 Running I/O for 15 seconds... 
00:35:10.146 8537.00 IOPS, 33.35 MiB/s
[2024-11-17T17:55:56.980Z] 8646.00 IOPS, 33.77 MiB/s
[2024-11-17T17:55:56.980Z] 18:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 889373
00:35:10.404 18:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:35:10.665 [2024-11-17 18:55:56.987555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.665 [2024-11-17 18:55:56.987601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:10.665 [2024-11-17 18:55:56.987648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.665 [2024-11-17 18:55:56.987681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:10.665 [... roughly 100 further identical command/completion pairs omitted: after the target is killed, every remaining in-flight READ (lba 53224-53712) and WRITE (lba 53784-54120) on qid:1 completes with ABORTED - SQ DELETION (00/08) ...]
00:35:10.668 [2024-11-17 18:55:56.990979] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.668 [2024-11-17 18:55:56.990993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.668 [2024-11-17 18:55:56.991007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.668 [2024-11-17 18:55:56.991034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.668 [2024-11-17 18:55:56.991049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.668 [2024-11-17 18:55:56.991062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.668 [2024-11-17 18:55:56.991076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.668 [2024-11-17 18:55:56.991089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.668 [2024-11-17 18:55:56.991103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.668 [2024-11-17 18:55:56.991116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.668 [2024-11-17 18:55:56.991130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.668 [2024-11-17 18:55:56.991143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.668 [2024-11-17 18:55:56.991158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.668 [2024-11-17 18:55:56.991171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.668 [2024-11-17 18:55:56.991184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.668 [2024-11-17 18:55:56.991197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.668 [2024-11-17 18:55:56.991211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.668 [2024-11-17 18:55:56.991224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.668 [2024-11-17 18:55:56.991238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.668 [2024-11-17 18:55:56.991251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.668 [2024-11-17 18:55:56.991267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.668 [2024-11-17 18:55:56.991281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.668 [2024-11-17 18:55:56.991295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.668 
[2024-11-17 18:55:56.991308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.668 [2024-11-17 18:55:56.991322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.668 [2024-11-17 18:55:56.991336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.668 [2024-11-17 18:55:56.991349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.669 [2024-11-17 18:55:56.991362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.669 [2024-11-17 18:55:56.991376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.669 [2024-11-17 18:55:56.991389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.669 [2024-11-17 18:55:56.991403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.669 [2024-11-17 18:55:56.991415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.669 [2024-11-17 18:55:56.991429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.669 [2024-11-17 18:55:56.991442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.669 [2024-11-17 18:55:56.991456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.669 [2024-11-17 18:55:56.991469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.669 [2024-11-17 18:55:56.991482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.669 [2024-11-17 18:55:56.991495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.669 [2024-11-17 18:55:56.991509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.669 [2024-11-17 18:55:56.991521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.669 [2024-11-17 18:55:56.991534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2183c20 is same with the state(6) to be set 00:35:10.669 [2024-11-17 18:55:56.991550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:10.669 [2024-11-17 18:55:56.991560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:10.669 [2024-11-17 18:55:56.991571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53776 len:8 PRP1 0x0 PRP2 0x0 00:35:10.669 [2024-11-17 18:55:56.991582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.669 [2024-11-17 18:55:56.991727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.669 [2024-11-17 18:55:56.991754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.669 [2024-11-17 18:55:56.991770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.669 [2024-11-17 18:55:56.991785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.669 [2024-11-17 18:55:56.991800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.669 [2024-11-17 18:55:56.991813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.669 [2024-11-17 18:55:56.991828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.669 [2024-11-17 18:55:56.991841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.669 [2024-11-17 18:55:56.991854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.669 [2024-11-17 18:55:56.994977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.669 [2024-11-17 18:55:56.995011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.669 [2024-11-17 18:55:56.995608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.669 [2024-11-17 18:55:56.995637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.669 [2024-11-17 18:55:56.995669] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.669 [2024-11-17 18:55:56.995896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.669 [2024-11-17 18:55:56.996131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.669 [2024-11-17 18:55:56.996150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.669 [2024-11-17 18:55:56.996164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.669 [2024-11-17 18:55:56.996178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:10.669 [2024-11-17 18:55:57.008415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.669 [2024-11-17 18:55:57.008840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.669 [2024-11-17 18:55:57.008869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.669 [2024-11-17 18:55:57.008886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.669 [2024-11-17 18:55:57.009130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.669 [2024-11-17 18:55:57.009338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.669 [2024-11-17 18:55:57.009357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.669 
[2024-11-17 18:55:57.009369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.669 [2024-11-17 18:55:57.009381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:10.669 [2024-11-17 18:55:57.021474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.669 [2024-11-17 18:55:57.021799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.669 [2024-11-17 18:55:57.021825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.669 [2024-11-17 18:55:57.021840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.669 [2024-11-17 18:55:57.022035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.669 [2024-11-17 18:55:57.022242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.669 [2024-11-17 18:55:57.022261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.669 [2024-11-17 18:55:57.022274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.669 [2024-11-17 18:55:57.022285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:10.669 [2024-11-17 18:55:57.034564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.669 [2024-11-17 18:55:57.034962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.669 [2024-11-17 18:55:57.034992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.669 [2024-11-17 18:55:57.035009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.669 [2024-11-17 18:55:57.035251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.669 [2024-11-17 18:55:57.035459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.669 [2024-11-17 18:55:57.035477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.669 [2024-11-17 18:55:57.035490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.669 [2024-11-17 18:55:57.035501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:10.669 [2024-11-17 18:55:57.047641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.669 [2024-11-17 18:55:57.048054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.669 [2024-11-17 18:55:57.048096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.669 [2024-11-17 18:55:57.048112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.669 [2024-11-17 18:55:57.048348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.669 [2024-11-17 18:55:57.048539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.669 [2024-11-17 18:55:57.048558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.669 [2024-11-17 18:55:57.048570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.669 [2024-11-17 18:55:57.048581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:10.669 [2024-11-17 18:55:57.060743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.669 [2024-11-17 18:55:57.061144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.669 [2024-11-17 18:55:57.061187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.670 [2024-11-17 18:55:57.061208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.670 [2024-11-17 18:55:57.061459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.670 [2024-11-17 18:55:57.061652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.670 [2024-11-17 18:55:57.061705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.670 [2024-11-17 18:55:57.061719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.670 [2024-11-17 18:55:57.061732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:10.670 [2024-11-17 18:55:57.073831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.670 [2024-11-17 18:55:57.074204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.670 [2024-11-17 18:55:57.074232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.670 [2024-11-17 18:55:57.074249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.670 [2024-11-17 18:55:57.074490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.670 [2024-11-17 18:55:57.074726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.670 [2024-11-17 18:55:57.074746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.670 [2024-11-17 18:55:57.074759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.670 [2024-11-17 18:55:57.074772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:10.670 [2024-11-17 18:55:57.086942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.670 [2024-11-17 18:55:57.087304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.670 [2024-11-17 18:55:57.087331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.670 [2024-11-17 18:55:57.087347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.670 [2024-11-17 18:55:57.087582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.670 [2024-11-17 18:55:57.087800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.670 [2024-11-17 18:55:57.087819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.670 [2024-11-17 18:55:57.087831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.670 [2024-11-17 18:55:57.087843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:10.670 [2024-11-17 18:55:57.100029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.670 [2024-11-17 18:55:57.100409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.670 [2024-11-17 18:55:57.100449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.670 [2024-11-17 18:55:57.100464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.670 [2024-11-17 18:55:57.100722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.670 [2024-11-17 18:55:57.100921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.670 [2024-11-17 18:55:57.100939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.670 [2024-11-17 18:55:57.100951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.670 [2024-11-17 18:55:57.100963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:10.670 [2024-11-17 18:55:57.113217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.670 [2024-11-17 18:55:57.113714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.670 [2024-11-17 18:55:57.113760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.670 [2024-11-17 18:55:57.113778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.670 [2024-11-17 18:55:57.114044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.670 [2024-11-17 18:55:57.114237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.670 [2024-11-17 18:55:57.114255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.670 [2024-11-17 18:55:57.114267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.670 [2024-11-17 18:55:57.114279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:10.670 [2024-11-17 18:55:57.126262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.670 [2024-11-17 18:55:57.126706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.670 [2024-11-17 18:55:57.126735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.670 [2024-11-17 18:55:57.126751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.670 [2024-11-17 18:55:57.127019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.670 [2024-11-17 18:55:57.127239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.670 [2024-11-17 18:55:57.127259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.670 [2024-11-17 18:55:57.127271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.670 [2024-11-17 18:55:57.127283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:10.670 [2024-11-17 18:55:57.139366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.670 [2024-11-17 18:55:57.139729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.670 [2024-11-17 18:55:57.139770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.670 [2024-11-17 18:55:57.139786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.670 [2024-11-17 18:55:57.140034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.670 [2024-11-17 18:55:57.140226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.670 [2024-11-17 18:55:57.140244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.670 [2024-11-17 18:55:57.140262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.670 [2024-11-17 18:55:57.140274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:10.670 [2024-11-17 18:55:57.152511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.670 [2024-11-17 18:55:57.152915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.670 [2024-11-17 18:55:57.152942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.671 [2024-11-17 18:55:57.152957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.671 [2024-11-17 18:55:57.153191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.671 [2024-11-17 18:55:57.153398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.671 [2024-11-17 18:55:57.153416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.671 [2024-11-17 18:55:57.153429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.671 [2024-11-17 18:55:57.153440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:10.671 [2024-11-17 18:55:57.165730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.671 [2024-11-17 18:55:57.166128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.671 [2024-11-17 18:55:57.166156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.671 [2024-11-17 18:55:57.166171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.671 [2024-11-17 18:55:57.166407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.671 [2024-11-17 18:55:57.166616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.671 [2024-11-17 18:55:57.166634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.671 [2024-11-17 18:55:57.166647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.671 [2024-11-17 18:55:57.166682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:10.671 [2024-11-17 18:55:57.178717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.671 [2024-11-17 18:55:57.179210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.671 [2024-11-17 18:55:57.179252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.671 [2024-11-17 18:55:57.179268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.671 [2024-11-17 18:55:57.179518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.671 [2024-11-17 18:55:57.179753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.671 [2024-11-17 18:55:57.179773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.671 [2024-11-17 18:55:57.179786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.671 [2024-11-17 18:55:57.179798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.671 [2024-11-17 18:55:57.191905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.671 [2024-11-17 18:55:57.192357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.671 [2024-11-17 18:55:57.192399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.671 [2024-11-17 18:55:57.192415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.671 [2024-11-17 18:55:57.192666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.671 [2024-11-17 18:55:57.192875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.671 [2024-11-17 18:55:57.192895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.671 [2024-11-17 18:55:57.192908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.671 [2024-11-17 18:55:57.192920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.671 [2024-11-17 18:55:57.204939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.671 [2024-11-17 18:55:57.205300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.671 [2024-11-17 18:55:57.205327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.671 [2024-11-17 18:55:57.205343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.671 [2024-11-17 18:55:57.205578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.671 [2024-11-17 18:55:57.205816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.671 [2024-11-17 18:55:57.205836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.671 [2024-11-17 18:55:57.205849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.671 [2024-11-17 18:55:57.205861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.671 [2024-11-17 18:55:57.218150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.671 [2024-11-17 18:55:57.218514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.671 [2024-11-17 18:55:57.218556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.671 [2024-11-17 18:55:57.218572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.671 [2024-11-17 18:55:57.218850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.671 [2024-11-17 18:55:57.219062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.671 [2024-11-17 18:55:57.219080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.671 [2024-11-17 18:55:57.219092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.671 [2024-11-17 18:55:57.219104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.671 [2024-11-17 18:55:57.231126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.671 [2024-11-17 18:55:57.231550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.671 [2024-11-17 18:55:57.231592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.671 [2024-11-17 18:55:57.231614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.671 [2024-11-17 18:55:57.231866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.671 [2024-11-17 18:55:57.232078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.671 [2024-11-17 18:55:57.232096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.671 [2024-11-17 18:55:57.232108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.671 [2024-11-17 18:55:57.232119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.931 [2024-11-17 18:55:57.244290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.931 [2024-11-17 18:55:57.244705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.931 [2024-11-17 18:55:57.244751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.931 [2024-11-17 18:55:57.244768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.931 [2024-11-17 18:55:57.245047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.931 [2024-11-17 18:55:57.245270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.931 [2024-11-17 18:55:57.245291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.931 [2024-11-17 18:55:57.245318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.931 [2024-11-17 18:55:57.245331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.931 [2024-11-17 18:55:57.257997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.931 [2024-11-17 18:55:57.258366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.931 [2024-11-17 18:55:57.258461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.931 [2024-11-17 18:55:57.258479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.931 [2024-11-17 18:55:57.258764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.931 [2024-11-17 18:55:57.258987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.931 [2024-11-17 18:55:57.259010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.931 [2024-11-17 18:55:57.259023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.931 [2024-11-17 18:55:57.259036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.931 [2024-11-17 18:55:57.271082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.931 [2024-11-17 18:55:57.271515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.931 [2024-11-17 18:55:57.271558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.931 [2024-11-17 18:55:57.271576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.931 [2024-11-17 18:55:57.271830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.931 [2024-11-17 18:55:57.272049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.931 [2024-11-17 18:55:57.272068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.931 [2024-11-17 18:55:57.272081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.931 [2024-11-17 18:55:57.272093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.931 [2024-11-17 18:55:57.284252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.931 [2024-11-17 18:55:57.284667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.931 [2024-11-17 18:55:57.284728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.931 [2024-11-17 18:55:57.284745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.931 [2024-11-17 18:55:57.284992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.931 [2024-11-17 18:55:57.285202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.931 [2024-11-17 18:55:57.285220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.931 [2024-11-17 18:55:57.285233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.931 [2024-11-17 18:55:57.285244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.931 [2024-11-17 18:55:57.297665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.931 [2024-11-17 18:55:57.298015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.931 [2024-11-17 18:55:57.298042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.931 [2024-11-17 18:55:57.298058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.931 [2024-11-17 18:55:57.298285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.931 [2024-11-17 18:55:57.298494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.931 [2024-11-17 18:55:57.298513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.931 [2024-11-17 18:55:57.298525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.931 [2024-11-17 18:55:57.298537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.931 [2024-11-17 18:55:57.310874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.931 [2024-11-17 18:55:57.311226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.931 [2024-11-17 18:55:57.311255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.931 [2024-11-17 18:55:57.311271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.931 [2024-11-17 18:55:57.311494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.931 [2024-11-17 18:55:57.311730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.931 [2024-11-17 18:55:57.311750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.931 [2024-11-17 18:55:57.311768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.931 [2024-11-17 18:55:57.311780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.931 [2024-11-17 18:55:57.324140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.931 [2024-11-17 18:55:57.324508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.931 [2024-11-17 18:55:57.324535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.931 [2024-11-17 18:55:57.324551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.931 [2024-11-17 18:55:57.324783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.931 [2024-11-17 18:55:57.325012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.931 [2024-11-17 18:55:57.325031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.931 [2024-11-17 18:55:57.325044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.931 [2024-11-17 18:55:57.325055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.931 7678.33 IOPS, 29.99 MiB/s [2024-11-17T17:55:57.507Z]
00:35:10.931 [2024-11-17 18:55:57.337263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.931 [2024-11-17 18:55:57.337691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.931 [2024-11-17 18:55:57.337735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.931 [2024-11-17 18:55:57.337751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.931 [2024-11-17 18:55:57.337993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.931 [2024-11-17 18:55:57.338200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.931 [2024-11-17 18:55:57.338219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.932 [2024-11-17 18:55:57.338231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.932 [2024-11-17 18:55:57.338243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.932 [2024-11-17 18:55:57.350368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.932 [2024-11-17 18:55:57.350794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.932 [2024-11-17 18:55:57.350839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.932 [2024-11-17 18:55:57.350856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.932 [2024-11-17 18:55:57.351096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.932 [2024-11-17 18:55:57.351304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.932 [2024-11-17 18:55:57.351322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.932 [2024-11-17 18:55:57.351334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.932 [2024-11-17 18:55:57.351345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.932 [2024-11-17 18:55:57.363517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.932 [2024-11-17 18:55:57.363880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.932 [2024-11-17 18:55:57.363909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.932 [2024-11-17 18:55:57.363926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.932 [2024-11-17 18:55:57.364179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.932 [2024-11-17 18:55:57.364372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.932 [2024-11-17 18:55:57.364390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.932 [2024-11-17 18:55:57.364402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.932 [2024-11-17 18:55:57.364414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.932 [2024-11-17 18:55:57.376716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.932 [2024-11-17 18:55:57.377058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.932 [2024-11-17 18:55:57.377086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.932 [2024-11-17 18:55:57.377102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.932 [2024-11-17 18:55:57.377328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.932 [2024-11-17 18:55:57.377536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.932 [2024-11-17 18:55:57.377554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.932 [2024-11-17 18:55:57.377566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.932 [2024-11-17 18:55:57.377578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.932 [2024-11-17 18:55:57.389904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.932 [2024-11-17 18:55:57.390287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.932 [2024-11-17 18:55:57.390331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.932 [2024-11-17 18:55:57.390347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.932 [2024-11-17 18:55:57.390601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.932 [2024-11-17 18:55:57.390837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.932 [2024-11-17 18:55:57.390857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.932 [2024-11-17 18:55:57.390870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.932 [2024-11-17 18:55:57.390882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.932 [2024-11-17 18:55:57.403021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.932 [2024-11-17 18:55:57.403437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.932 [2024-11-17 18:55:57.403487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.932 [2024-11-17 18:55:57.403507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.932 [2024-11-17 18:55:57.403762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.932 [2024-11-17 18:55:57.403975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.932 [2024-11-17 18:55:57.403993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.932 [2024-11-17 18:55:57.404006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.932 [2024-11-17 18:55:57.404018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.932 [2024-11-17 18:55:57.416123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.932 [2024-11-17 18:55:57.416503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.932 [2024-11-17 18:55:57.416544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.932 [2024-11-17 18:55:57.416558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.932 [2024-11-17 18:55:57.416831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.932 [2024-11-17 18:55:57.417045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.932 [2024-11-17 18:55:57.417064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.932 [2024-11-17 18:55:57.417076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.932 [2024-11-17 18:55:57.417088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.932 [2024-11-17 18:55:57.429351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.932 [2024-11-17 18:55:57.429693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.932 [2024-11-17 18:55:57.429721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.932 [2024-11-17 18:55:57.429738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.932 [2024-11-17 18:55:57.429966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.932 [2024-11-17 18:55:57.430173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.932 [2024-11-17 18:55:57.430191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.932 [2024-11-17 18:55:57.430203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.932 [2024-11-17 18:55:57.430215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.932 [2024-11-17 18:55:57.442443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.932 [2024-11-17 18:55:57.442877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.932 [2024-11-17 18:55:57.442920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.932 [2024-11-17 18:55:57.442937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.932 [2024-11-17 18:55:57.443186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.932 [2024-11-17 18:55:57.443386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.932 [2024-11-17 18:55:57.443405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.932 [2024-11-17 18:55:57.443417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.932 [2024-11-17 18:55:57.443429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.932 [2024-11-17 18:55:57.455499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.932 [2024-11-17 18:55:57.455870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.932 [2024-11-17 18:55:57.455912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.932 [2024-11-17 18:55:57.455928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.932 [2024-11-17 18:55:57.456174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.932 [2024-11-17 18:55:57.456366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.932 [2024-11-17 18:55:57.456384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.932 [2024-11-17 18:55:57.456397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.932 [2024-11-17 18:55:57.456408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.932 [2024-11-17 18:55:57.468701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.933 [2024-11-17 18:55:57.469192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.933 [2024-11-17 18:55:57.469234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.933 [2024-11-17 18:55:57.469250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.933 [2024-11-17 18:55:57.469502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.933 [2024-11-17 18:55:57.469738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.933 [2024-11-17 18:55:57.469758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.933 [2024-11-17 18:55:57.469771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.933 [2024-11-17 18:55:57.469783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.933 [2024-11-17 18:55:57.481838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:10.933 [2024-11-17 18:55:57.482329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:10.933 [2024-11-17 18:55:57.482371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:10.933 [2024-11-17 18:55:57.482389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:10.933 [2024-11-17 18:55:57.482640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:10.933 [2024-11-17 18:55:57.482878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:10.933 [2024-11-17 18:55:57.482898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:10.933 [2024-11-17 18:55:57.482915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:10.933 [2024-11-17 18:55:57.482928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:10.933 [2024-11-17 18:55:57.494944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:10.933 [2024-11-17 18:55:57.495361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:10.933 [2024-11-17 18:55:57.495390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:10.933 [2024-11-17 18:55:57.495407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:10.933 [2024-11-17 18:55:57.495656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:10.933 [2024-11-17 18:55:57.495894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:10.933 [2024-11-17 18:55:57.495916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:10.933 [2024-11-17 18:55:57.495930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:10.933 [2024-11-17 18:55:57.495943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.192 [2024-11-17 18:55:57.508437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.192 [2024-11-17 18:55:57.508800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.192 [2024-11-17 18:55:57.508843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.192 [2024-11-17 18:55:57.508859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.192 [2024-11-17 18:55:57.509107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.192 [2024-11-17 18:55:57.509300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.192 [2024-11-17 18:55:57.509318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.192 [2024-11-17 18:55:57.509330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.192 [2024-11-17 18:55:57.509342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.192 [2024-11-17 18:55:57.521693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.192 [2024-11-17 18:55:57.522093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.192 [2024-11-17 18:55:57.522121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.192 [2024-11-17 18:55:57.522137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.192 [2024-11-17 18:55:57.522359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.192 [2024-11-17 18:55:57.522585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.192 [2024-11-17 18:55:57.522603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.192 [2024-11-17 18:55:57.522616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.192 [2024-11-17 18:55:57.522628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.193 [2024-11-17 18:55:57.534788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.193 [2024-11-17 18:55:57.535176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.193 [2024-11-17 18:55:57.535219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.193 [2024-11-17 18:55:57.535235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.193 [2024-11-17 18:55:57.535488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.193 [2024-11-17 18:55:57.535723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.193 [2024-11-17 18:55:57.535743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.193 [2024-11-17 18:55:57.535756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.193 [2024-11-17 18:55:57.535768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.193 [2024-11-17 18:55:57.547864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.193 [2024-11-17 18:55:57.548261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.193 [2024-11-17 18:55:57.548287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.193 [2024-11-17 18:55:57.548302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.193 [2024-11-17 18:55:57.548532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.193 [2024-11-17 18:55:57.548768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.193 [2024-11-17 18:55:57.548788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.193 [2024-11-17 18:55:57.548801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.193 [2024-11-17 18:55:57.548813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.193 [2024-11-17 18:55:57.561105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.193 [2024-11-17 18:55:57.561470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.193 [2024-11-17 18:55:57.561514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.193 [2024-11-17 18:55:57.561530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.193 [2024-11-17 18:55:57.561808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.193 [2024-11-17 18:55:57.562020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.193 [2024-11-17 18:55:57.562038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.193 [2024-11-17 18:55:57.562051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.193 [2024-11-17 18:55:57.562062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.193 [2024-11-17 18:55:57.574107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.193 [2024-11-17 18:55:57.574501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.193 [2024-11-17 18:55:57.574528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.193 [2024-11-17 18:55:57.574549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.193 [2024-11-17 18:55:57.574801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.193 [2024-11-17 18:55:57.575016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.193 [2024-11-17 18:55:57.575036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.193 [2024-11-17 18:55:57.575048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.193 [2024-11-17 18:55:57.575060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.193 [2024-11-17 18:55:57.587276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.193 [2024-11-17 18:55:57.587578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.193 [2024-11-17 18:55:57.587620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.193 [2024-11-17 18:55:57.587635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.193 [2024-11-17 18:55:57.587879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.193 [2024-11-17 18:55:57.588105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.193 [2024-11-17 18:55:57.588124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.193 [2024-11-17 18:55:57.588136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.193 [2024-11-17 18:55:57.588148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.193 [2024-11-17 18:55:57.600418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.193 [2024-11-17 18:55:57.600799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.193 [2024-11-17 18:55:57.600828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.193 [2024-11-17 18:55:57.600845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.193 [2024-11-17 18:55:57.601099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.193 [2024-11-17 18:55:57.601308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.193 [2024-11-17 18:55:57.601327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.193 [2024-11-17 18:55:57.601339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.193 [2024-11-17 18:55:57.601351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.193 [2024-11-17 18:55:57.613629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.193 [2024-11-17 18:55:57.614005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.193 [2024-11-17 18:55:57.614048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.193 [2024-11-17 18:55:57.614064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.193 [2024-11-17 18:55:57.614315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.193 [2024-11-17 18:55:57.614528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.193 [2024-11-17 18:55:57.614547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.193 [2024-11-17 18:55:57.614559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.193 [2024-11-17 18:55:57.614571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.193 [2024-11-17 18:55:57.626723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.193 [2024-11-17 18:55:57.627088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.193 [2024-11-17 18:55:57.627114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.193 [2024-11-17 18:55:57.627130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.193 [2024-11-17 18:55:57.627331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.193 [2024-11-17 18:55:57.627555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.193 [2024-11-17 18:55:57.627574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.193 [2024-11-17 18:55:57.627586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.193 [2024-11-17 18:55:57.627597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.193 [2024-11-17 18:55:57.639825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.193 [2024-11-17 18:55:57.640147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.193 [2024-11-17 18:55:57.640174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.193 [2024-11-17 18:55:57.640189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.194 [2024-11-17 18:55:57.640384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.194 [2024-11-17 18:55:57.640592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.194 [2024-11-17 18:55:57.640610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.194 [2024-11-17 18:55:57.640623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.194 [2024-11-17 18:55:57.640634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.194 [2024-11-17 18:55:57.652971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.194 [2024-11-17 18:55:57.653378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.194 [2024-11-17 18:55:57.653406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.194 [2024-11-17 18:55:57.653422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.194 [2024-11-17 18:55:57.653647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.194 [2024-11-17 18:55:57.653883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.194 [2024-11-17 18:55:57.653903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.194 [2024-11-17 18:55:57.653930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.194 [2024-11-17 18:55:57.653943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.194 [2024-11-17 18:55:57.666078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.194 [2024-11-17 18:55:57.666503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.194 [2024-11-17 18:55:57.666530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.194 [2024-11-17 18:55:57.666546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.194 [2024-11-17 18:55:57.666807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.194 [2024-11-17 18:55:57.667020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.194 [2024-11-17 18:55:57.667039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.194 [2024-11-17 18:55:57.667051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.194 [2024-11-17 18:55:57.667064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.194 [2024-11-17 18:55:57.679219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.194 [2024-11-17 18:55:57.679591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.194 [2024-11-17 18:55:57.679634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.194 [2024-11-17 18:55:57.679651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.194 [2024-11-17 18:55:57.679916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.194 [2024-11-17 18:55:57.680126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.194 [2024-11-17 18:55:57.680144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.194 [2024-11-17 18:55:57.680156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.194 [2024-11-17 18:55:57.680168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.194 [2024-11-17 18:55:57.692220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.194 [2024-11-17 18:55:57.692709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.194 [2024-11-17 18:55:57.692761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.194 [2024-11-17 18:55:57.692777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.194 [2024-11-17 18:55:57.693032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.194 [2024-11-17 18:55:57.693239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.194 [2024-11-17 18:55:57.693258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.194 [2024-11-17 18:55:57.693270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.194 [2024-11-17 18:55:57.693281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.194 [2024-11-17 18:55:57.705416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.194 [2024-11-17 18:55:57.705900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.194 [2024-11-17 18:55:57.705946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.194 [2024-11-17 18:55:57.705963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.194 [2024-11-17 18:55:57.706210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.194 [2024-11-17 18:55:57.706402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.194 [2024-11-17 18:55:57.706421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.194 [2024-11-17 18:55:57.706433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.194 [2024-11-17 18:55:57.706445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.194 [2024-11-17 18:55:57.718414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.194 [2024-11-17 18:55:57.718782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.194 [2024-11-17 18:55:57.718824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.194 [2024-11-17 18:55:57.718840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.194 [2024-11-17 18:55:57.719091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.194 [2024-11-17 18:55:57.719298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.194 [2024-11-17 18:55:57.719317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.194 [2024-11-17 18:55:57.719330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.194 [2024-11-17 18:55:57.719342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.194 [2024-11-17 18:55:57.731630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.194 [2024-11-17 18:55:57.732050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.194 [2024-11-17 18:55:57.732085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.194 [2024-11-17 18:55:57.732104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.194 [2024-11-17 18:55:57.732334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.194 [2024-11-17 18:55:57.732543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.194 [2024-11-17 18:55:57.732562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.194 [2024-11-17 18:55:57.732574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.194 [2024-11-17 18:55:57.732586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.194 [2024-11-17 18:55:57.744718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.194 [2024-11-17 18:55:57.745083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.195 [2024-11-17 18:55:57.745111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.195 [2024-11-17 18:55:57.745134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.195 [2024-11-17 18:55:57.745398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.195 [2024-11-17 18:55:57.745611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.195 [2024-11-17 18:55:57.745632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.195 [2024-11-17 18:55:57.745645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.195 [2024-11-17 18:55:57.745657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.195 [2024-11-17 18:55:57.757926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.195 [2024-11-17 18:55:57.758376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.195 [2024-11-17 18:55:57.758421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.195 [2024-11-17 18:55:57.758438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.195 [2024-11-17 18:55:57.758690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.195 [2024-11-17 18:55:57.758890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.195 [2024-11-17 18:55:57.758909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.195 [2024-11-17 18:55:57.758922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.195 [2024-11-17 18:55:57.758934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.454 [2024-11-17 18:55:57.771118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.454 [2024-11-17 18:55:57.771533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.454 [2024-11-17 18:55:57.771585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.454 [2024-11-17 18:55:57.771601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.454 [2024-11-17 18:55:57.771859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.454 [2024-11-17 18:55:57.772078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.454 [2024-11-17 18:55:57.772112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.454 [2024-11-17 18:55:57.772127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.454 [2024-11-17 18:55:57.772140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.454 [2024-11-17 18:55:57.784132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.454 [2024-11-17 18:55:57.784484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.454 [2024-11-17 18:55:57.784550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.454 [2024-11-17 18:55:57.784566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.454 [2024-11-17 18:55:57.784814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.454 [2024-11-17 18:55:57.785028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.454 [2024-11-17 18:55:57.785047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.454 [2024-11-17 18:55:57.785059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.454 [2024-11-17 18:55:57.785071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.454 [2024-11-17 18:55:57.797188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.455 [2024-11-17 18:55:57.797596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.455 [2024-11-17 18:55:57.797660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.455 [2024-11-17 18:55:57.797686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.455 [2024-11-17 18:55:57.797958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.455 [2024-11-17 18:55:57.798166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.455 [2024-11-17 18:55:57.798184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.455 [2024-11-17 18:55:57.798196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.455 [2024-11-17 18:55:57.798208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.455 [2024-11-17 18:55:57.810274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.455 [2024-11-17 18:55:57.810614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.455 [2024-11-17 18:55:57.810695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.455 [2024-11-17 18:55:57.810713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.455 [2024-11-17 18:55:57.810951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.455 [2024-11-17 18:55:57.811159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.455 [2024-11-17 18:55:57.811177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.455 [2024-11-17 18:55:57.811190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.455 [2024-11-17 18:55:57.811201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.455 [2024-11-17 18:55:57.823325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.455 [2024-11-17 18:55:57.823690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.455 [2024-11-17 18:55:57.823733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.455 [2024-11-17 18:55:57.823749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.455 [2024-11-17 18:55:57.823996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.455 [2024-11-17 18:55:57.824188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.455 [2024-11-17 18:55:57.824206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.455 [2024-11-17 18:55:57.824224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.455 [2024-11-17 18:55:57.824236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.455 [2024-11-17 18:55:57.836391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.455 [2024-11-17 18:55:57.836758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.455 [2024-11-17 18:55:57.836786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.455 [2024-11-17 18:55:57.836802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.455 [2024-11-17 18:55:57.837038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.455 [2024-11-17 18:55:57.837246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.455 [2024-11-17 18:55:57.837265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.455 [2024-11-17 18:55:57.837277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.455 [2024-11-17 18:55:57.837289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.455 [2024-11-17 18:55:57.849494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.455 [2024-11-17 18:55:57.849870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.455 [2024-11-17 18:55:57.849913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.455 [2024-11-17 18:55:57.849930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.455 [2024-11-17 18:55:57.850182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.455 [2024-11-17 18:55:57.850374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.455 [2024-11-17 18:55:57.850403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.455 [2024-11-17 18:55:57.850415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.455 [2024-11-17 18:55:57.850427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.455 [2024-11-17 18:55:57.862601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.455 [2024-11-17 18:55:57.862974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.455 [2024-11-17 18:55:57.863039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.455 [2024-11-17 18:55:57.863055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.455 [2024-11-17 18:55:57.863290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.455 [2024-11-17 18:55:57.863497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.455 [2024-11-17 18:55:57.863516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.455 [2024-11-17 18:55:57.863528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.455 [2024-11-17 18:55:57.863540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.455 [2024-11-17 18:55:57.875841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.455 [2024-11-17 18:55:57.876266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.455 [2024-11-17 18:55:57.876307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.455 [2024-11-17 18:55:57.876324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.455 [2024-11-17 18:55:57.876559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.455 [2024-11-17 18:55:57.876782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.455 [2024-11-17 18:55:57.876802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.455 [2024-11-17 18:55:57.876815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.455 [2024-11-17 18:55:57.876827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.455 [2024-11-17 18:55:57.889085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.455 [2024-11-17 18:55:57.889420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.455 [2024-11-17 18:55:57.889447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.455 [2024-11-17 18:55:57.889462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.455 [2024-11-17 18:55:57.889696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.455 [2024-11-17 18:55:57.889910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.455 [2024-11-17 18:55:57.889929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.455 [2024-11-17 18:55:57.889942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.455 [2024-11-17 18:55:57.889953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.455 [2024-11-17 18:55:57.902188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.455 [2024-11-17 18:55:57.902515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.456 [2024-11-17 18:55:57.902542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.456 [2024-11-17 18:55:57.902558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.456 [2024-11-17 18:55:57.902791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.456 [2024-11-17 18:55:57.902999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.456 [2024-11-17 18:55:57.903017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.456 [2024-11-17 18:55:57.903030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.456 [2024-11-17 18:55:57.903042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.456 [2024-11-17 18:55:57.915390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.456 [2024-11-17 18:55:57.915819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.456 [2024-11-17 18:55:57.915862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.456 [2024-11-17 18:55:57.915883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.456 [2024-11-17 18:55:57.916135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.456 [2024-11-17 18:55:57.916327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.456 [2024-11-17 18:55:57.916345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.456 [2024-11-17 18:55:57.916357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.456 [2024-11-17 18:55:57.916369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.456 [2024-11-17 18:55:57.928491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.456 [2024-11-17 18:55:57.928861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.456 [2024-11-17 18:55:57.928888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.456 [2024-11-17 18:55:57.928903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.456 [2024-11-17 18:55:57.929117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.456 [2024-11-17 18:55:57.929324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.456 [2024-11-17 18:55:57.929342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.456 [2024-11-17 18:55:57.929354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.456 [2024-11-17 18:55:57.929365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.456 [2024-11-17 18:55:57.941580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.456 [2024-11-17 18:55:57.941972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.456 [2024-11-17 18:55:57.942016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.456 [2024-11-17 18:55:57.942032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.456 [2024-11-17 18:55:57.942282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.456 [2024-11-17 18:55:57.942490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.456 [2024-11-17 18:55:57.942507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.456 [2024-11-17 18:55:57.942519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.456 [2024-11-17 18:55:57.942531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.456 [2024-11-17 18:55:57.954695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.456 [2024-11-17 18:55:57.955130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.456 [2024-11-17 18:55:57.955157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.456 [2024-11-17 18:55:57.955187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.456 [2024-11-17 18:55:57.955428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.456 [2024-11-17 18:55:57.955625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.456 [2024-11-17 18:55:57.955644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.456 [2024-11-17 18:55:57.955656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.456 [2024-11-17 18:55:57.955694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.456 [2024-11-17 18:55:57.967795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.456 [2024-11-17 18:55:57.968161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.456 [2024-11-17 18:55:57.968204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.456 [2024-11-17 18:55:57.968219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.456 [2024-11-17 18:55:57.968472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.456 [2024-11-17 18:55:57.968709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.456 [2024-11-17 18:55:57.968745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.456 [2024-11-17 18:55:57.968759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.456 [2024-11-17 18:55:57.968772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.456 [2024-11-17 18:55:57.980867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.456 [2024-11-17 18:55:57.981182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.456 [2024-11-17 18:55:57.981223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.456 [2024-11-17 18:55:57.981239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.456 [2024-11-17 18:55:57.981454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.456 [2024-11-17 18:55:57.981662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.456 [2024-11-17 18:55:57.981690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.456 [2024-11-17 18:55:57.981703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.456 [2024-11-17 18:55:57.981714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.456 [2024-11-17 18:55:57.993834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.456 [2024-11-17 18:55:57.994177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.456 [2024-11-17 18:55:57.994204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.456 [2024-11-17 18:55:57.994220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.456 [2024-11-17 18:55:57.994443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.456 [2024-11-17 18:55:57.994650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.456 [2024-11-17 18:55:57.994668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.456 [2024-11-17 18:55:57.994712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.456 [2024-11-17 18:55:57.994726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.456 [2024-11-17 18:55:58.007129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.456 [2024-11-17 18:55:58.007533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.456 [2024-11-17 18:55:58.007577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.456 [2024-11-17 18:55:58.007592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.456 [2024-11-17 18:55:58.007866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.456 [2024-11-17 18:55:58.008109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.456 [2024-11-17 18:55:58.008132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.456 [2024-11-17 18:55:58.008146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.456 [2024-11-17 18:55:58.008159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.456 [2024-11-17 18:55:58.020273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.456 [2024-11-17 18:55:58.020690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.456 [2024-11-17 18:55:58.020733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.457 [2024-11-17 18:55:58.020750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.457 [2024-11-17 18:55:58.020986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.457 [2024-11-17 18:55:58.021178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.457 [2024-11-17 18:55:58.021196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.457 [2024-11-17 18:55:58.021209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.457 [2024-11-17 18:55:58.021221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.716 [2024-11-17 18:55:58.033340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.716 [2024-11-17 18:55:58.033736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.716 [2024-11-17 18:55:58.033766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.716 [2024-11-17 18:55:58.033782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.716 [2024-11-17 18:55:58.033997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.716 [2024-11-17 18:55:58.034242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.716 [2024-11-17 18:55:58.034260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.716 [2024-11-17 18:55:58.034272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.716 [2024-11-17 18:55:58.034284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.716 [2024-11-17 18:55:58.046421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.716 [2024-11-17 18:55:58.046901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.716 [2024-11-17 18:55:58.046943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.716 [2024-11-17 18:55:58.046959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.716 [2024-11-17 18:55:58.047202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.716 [2024-11-17 18:55:58.047394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.716 [2024-11-17 18:55:58.047412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.716 [2024-11-17 18:55:58.047424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.716 [2024-11-17 18:55:58.047436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.716 [2024-11-17 18:55:58.059432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.716 [2024-11-17 18:55:58.059866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.716 [2024-11-17 18:55:58.059895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.717 [2024-11-17 18:55:58.059911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.717 [2024-11-17 18:55:58.060153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.717 [2024-11-17 18:55:58.060361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.717 [2024-11-17 18:55:58.060379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.717 [2024-11-17 18:55:58.060392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.717 [2024-11-17 18:55:58.060403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.717 [2024-11-17 18:55:58.072585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.717 [2024-11-17 18:55:58.072946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.717 [2024-11-17 18:55:58.072976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.717 [2024-11-17 18:55:58.073007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.717 [2024-11-17 18:55:58.073243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.717 [2024-11-17 18:55:58.073436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.717 [2024-11-17 18:55:58.073454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.717 [2024-11-17 18:55:58.073466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.717 [2024-11-17 18:55:58.073478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.717 [2024-11-17 18:55:58.085709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.717 [2024-11-17 18:55:58.086074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.717 [2024-11-17 18:55:58.086116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.717 [2024-11-17 18:55:58.086139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.717 [2024-11-17 18:55:58.086408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.717 [2024-11-17 18:55:58.086600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.717 [2024-11-17 18:55:58.086618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.717 [2024-11-17 18:55:58.086631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.717 [2024-11-17 18:55:58.086643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.717 [2024-11-17 18:55:58.098969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.717 [2024-11-17 18:55:58.099461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.717 [2024-11-17 18:55:58.099503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.717 [2024-11-17 18:55:58.099520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.717 [2024-11-17 18:55:58.099799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.717 [2024-11-17 18:55:58.100003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.717 [2024-11-17 18:55:58.100023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.717 [2024-11-17 18:55:58.100036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.717 [2024-11-17 18:55:58.100048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.717 [2024-11-17 18:55:58.112143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.717 [2024-11-17 18:55:58.112484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.717 [2024-11-17 18:55:58.112550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.717 [2024-11-17 18:55:58.112566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.717 [2024-11-17 18:55:58.112791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.717 [2024-11-17 18:55:58.112989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.717 [2024-11-17 18:55:58.113022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.717 [2024-11-17 18:55:58.113034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.717 [2024-11-17 18:55:58.113046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.717 [2024-11-17 18:55:58.125420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.717 [2024-11-17 18:55:58.125796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.717 [2024-11-17 18:55:58.125825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.717 [2024-11-17 18:55:58.125841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.717 [2024-11-17 18:55:58.126084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.717 [2024-11-17 18:55:58.126288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.717 [2024-11-17 18:55:58.126307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.717 [2024-11-17 18:55:58.126320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.717 [2024-11-17 18:55:58.126331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.717 [2024-11-17 18:55:58.138783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.717 [2024-11-17 18:55:58.139288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.717 [2024-11-17 18:55:58.139338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.717 [2024-11-17 18:55:58.139354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.717 [2024-11-17 18:55:58.139620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.717 [2024-11-17 18:55:58.139862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.717 [2024-11-17 18:55:58.139884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.717 [2024-11-17 18:55:58.139898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.717 [2024-11-17 18:55:58.139911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.717 [2024-11-17 18:55:58.152165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.717 [2024-11-17 18:55:58.152506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.717 [2024-11-17 18:55:58.152548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.717 [2024-11-17 18:55:58.152580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.717 [2024-11-17 18:55:58.152835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.717 [2024-11-17 18:55:58.153073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.717 [2024-11-17 18:55:58.153093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.717 [2024-11-17 18:55:58.153106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.717 [2024-11-17 18:55:58.153118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.717 [2024-11-17 18:55:58.165490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.717 [2024-11-17 18:55:58.165887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.717 [2024-11-17 18:55:58.165916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.718 [2024-11-17 18:55:58.165933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.718 [2024-11-17 18:55:58.166175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.718 [2024-11-17 18:55:58.166388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.718 [2024-11-17 18:55:58.166407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.718 [2024-11-17 18:55:58.166425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.718 [2024-11-17 18:55:58.166438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.718 [2024-11-17 18:55:58.178799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.718 [2024-11-17 18:55:58.179253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.718 [2024-11-17 18:55:58.179281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.718 [2024-11-17 18:55:58.179298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.718 [2024-11-17 18:55:58.179538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.718 [2024-11-17 18:55:58.179767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.718 [2024-11-17 18:55:58.179787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.718 [2024-11-17 18:55:58.179800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.718 [2024-11-17 18:55:58.179813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.718 [2024-11-17 18:55:58.192136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.718 [2024-11-17 18:55:58.192445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.718 [2024-11-17 18:55:58.192487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.718 [2024-11-17 18:55:58.192503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.718 [2024-11-17 18:55:58.192756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.718 [2024-11-17 18:55:58.192977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.718 [2024-11-17 18:55:58.193011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.718 [2024-11-17 18:55:58.193023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.718 [2024-11-17 18:55:58.193035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.718 [2024-11-17 18:55:58.205371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.718 [2024-11-17 18:55:58.205756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.718 [2024-11-17 18:55:58.205784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.718 [2024-11-17 18:55:58.205801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.718 [2024-11-17 18:55:58.206029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.718 [2024-11-17 18:55:58.206244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.718 [2024-11-17 18:55:58.206264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.718 [2024-11-17 18:55:58.206277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.718 [2024-11-17 18:55:58.206288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.718 [2024-11-17 18:55:58.218604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.718 [2024-11-17 18:55:58.218993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.718 [2024-11-17 18:55:58.219021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.718 [2024-11-17 18:55:58.219038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.718 [2024-11-17 18:55:58.219281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.718 [2024-11-17 18:55:58.219480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.718 [2024-11-17 18:55:58.219498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.718 [2024-11-17 18:55:58.219511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.718 [2024-11-17 18:55:58.219523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.718 [2024-11-17 18:55:58.231854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.718 [2024-11-17 18:55:58.232200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.718 [2024-11-17 18:55:58.232228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.718 [2024-11-17 18:55:58.232244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.718 [2024-11-17 18:55:58.232469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.718 [2024-11-17 18:55:58.232712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.718 [2024-11-17 18:55:58.232732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.718 [2024-11-17 18:55:58.232745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.718 [2024-11-17 18:55:58.232757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.718 [2024-11-17 18:55:58.245067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.718 [2024-11-17 18:55:58.245436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.718 [2024-11-17 18:55:58.245464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.718 [2024-11-17 18:55:58.245479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.718 [2024-11-17 18:55:58.245710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.718 [2024-11-17 18:55:58.245915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.718 [2024-11-17 18:55:58.245934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.718 [2024-11-17 18:55:58.245947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.718 [2024-11-17 18:55:58.245974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.718 [2024-11-17 18:55:58.258323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.718 [2024-11-17 18:55:58.258770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.718 [2024-11-17 18:55:58.258803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.718 [2024-11-17 18:55:58.258826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.718 [2024-11-17 18:55:58.259069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.718 [2024-11-17 18:55:58.259342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.718 [2024-11-17 18:55:58.259365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.718 [2024-11-17 18:55:58.259380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.718 [2024-11-17 18:55:58.259393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.718 [2024-11-17 18:55:58.271590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.718 [2024-11-17 18:55:58.272029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.718 [2024-11-17 18:55:58.272058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.718 [2024-11-17 18:55:58.272074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.718 [2024-11-17 18:55:58.272296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.718 [2024-11-17 18:55:58.272511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.718 [2024-11-17 18:55:58.272529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.718 [2024-11-17 18:55:58.272542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.719 [2024-11-17 18:55:58.272554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.719 [2024-11-17 18:55:58.284892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.719 [2024-11-17 18:55:58.285255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.719 [2024-11-17 18:55:58.285284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.719 [2024-11-17 18:55:58.285301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.719 [2024-11-17 18:55:58.285531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.719 [2024-11-17 18:55:58.285776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.719 [2024-11-17 18:55:58.285796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.719 [2024-11-17 18:55:58.285809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.719 [2024-11-17 18:55:58.285822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.978 [2024-11-17 18:55:58.298582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.978 [2024-11-17 18:55:58.299053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.978 [2024-11-17 18:55:58.299095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.978 [2024-11-17 18:55:58.299112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.978 [2024-11-17 18:55:58.299354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.978 [2024-11-17 18:55:58.299573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.978 [2024-11-17 18:55:58.299592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.978 [2024-11-17 18:55:58.299605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.978 [2024-11-17 18:55:58.299616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.978 [2024-11-17 18:55:58.311818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.978 [2024-11-17 18:55:58.312177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.978 [2024-11-17 18:55:58.312205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.978 [2024-11-17 18:55:58.312221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.978 [2024-11-17 18:55:58.312449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.978 [2024-11-17 18:55:58.312664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.978 [2024-11-17 18:55:58.312708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.978 [2024-11-17 18:55:58.312722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.978 [2024-11-17 18:55:58.312735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.978 [2024-11-17 18:55:58.325097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.978 [2024-11-17 18:55:58.325472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.978 [2024-11-17 18:55:58.325517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.978 [2024-11-17 18:55:58.325533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.978 [2024-11-17 18:55:58.325800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.978 [2024-11-17 18:55:58.326018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.978 [2024-11-17 18:55:58.326038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.978 [2024-11-17 18:55:58.326051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.978 [2024-11-17 18:55:58.326062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.978 5758.75 IOPS, 22.50 MiB/s [2024-11-17T17:55:58.554Z] [2024-11-17 18:55:58.338425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.978 [2024-11-17 18:55:58.338823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.978 [2024-11-17 18:55:58.338853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.978 [2024-11-17 18:55:58.338869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.978 [2024-11-17 18:55:58.339101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.978 [2024-11-17 18:55:58.339316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.978 [2024-11-17 18:55:58.339335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.978 [2024-11-17 18:55:58.339353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.979 [2024-11-17 18:55:58.339365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.979 [2024-11-17 18:55:58.351691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.979 [2024-11-17 18:55:58.352130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.979 [2024-11-17 18:55:58.352158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.979 [2024-11-17 18:55:58.352175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.979 [2024-11-17 18:55:58.352416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.979 [2024-11-17 18:55:58.352614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.979 [2024-11-17 18:55:58.352633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.979 [2024-11-17 18:55:58.352646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.979 [2024-11-17 18:55:58.352683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.979 [2024-11-17 18:55:58.365005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.979 [2024-11-17 18:55:58.365334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.979 [2024-11-17 18:55:58.365362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.979 [2024-11-17 18:55:58.365378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.979 [2024-11-17 18:55:58.365603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.979 [2024-11-17 18:55:58.365846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.979 [2024-11-17 18:55:58.365866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.979 [2024-11-17 18:55:58.365879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.979 [2024-11-17 18:55:58.365891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.979 [2024-11-17 18:55:58.378197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.979 [2024-11-17 18:55:58.378569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.979 [2024-11-17 18:55:58.378612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.979 [2024-11-17 18:55:58.378628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.979 [2024-11-17 18:55:58.378878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.979 [2024-11-17 18:55:58.379096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.979 [2024-11-17 18:55:58.379115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.979 [2024-11-17 18:55:58.379127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.979 [2024-11-17 18:55:58.379139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.979 [2024-11-17 18:55:58.391434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.979 [2024-11-17 18:55:58.391791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.979 [2024-11-17 18:55:58.391819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.979 [2024-11-17 18:55:58.391835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.979 [2024-11-17 18:55:58.392064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.979 [2024-11-17 18:55:58.392278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.979 [2024-11-17 18:55:58.392298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.979 [2024-11-17 18:55:58.392310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.979 [2024-11-17 18:55:58.392322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.979 [2024-11-17 18:55:58.404632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:11.979 [2024-11-17 18:55:58.404990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.979 [2024-11-17 18:55:58.405018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:11.979 [2024-11-17 18:55:58.405034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:11.979 [2024-11-17 18:55:58.405263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:11.979 [2024-11-17 18:55:58.405477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:11.979 [2024-11-17 18:55:58.405497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:11.979 [2024-11-17 18:55:58.405509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:11.979 [2024-11-17 18:55:58.405522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:11.979 [2024-11-17 18:55:58.417858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.979 [2024-11-17 18:55:58.418251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.979 [2024-11-17 18:55:58.418280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.979 [2024-11-17 18:55:58.418296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.979 [2024-11-17 18:55:58.418539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.979 [2024-11-17 18:55:58.418763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.979 [2024-11-17 18:55:58.418784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.979 [2024-11-17 18:55:58.418798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.979 [2024-11-17 18:55:58.418811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.979 [2024-11-17 18:55:58.431100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.979 [2024-11-17 18:55:58.431476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.979 [2024-11-17 18:55:58.431504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.979 [2024-11-17 18:55:58.431537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.979 [2024-11-17 18:55:58.431791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.979 [2024-11-17 18:55:58.431996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.979 [2024-11-17 18:55:58.432029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.979 [2024-11-17 18:55:58.432041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.979 [2024-11-17 18:55:58.432054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.979 [2024-11-17 18:55:58.444460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.979 [2024-11-17 18:55:58.444798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.979 [2024-11-17 18:55:58.444842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.979 [2024-11-17 18:55:58.444859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.979 [2024-11-17 18:55:58.445099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.979 [2024-11-17 18:55:58.445313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.979 [2024-11-17 18:55:58.445332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.979 [2024-11-17 18:55:58.445345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.979 [2024-11-17 18:55:58.445357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.980 [2024-11-17 18:55:58.457765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.980 [2024-11-17 18:55:58.458226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.980 [2024-11-17 18:55:58.458254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.980 [2024-11-17 18:55:58.458271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.980 [2024-11-17 18:55:58.458513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.980 [2024-11-17 18:55:58.458738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.980 [2024-11-17 18:55:58.458759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.980 [2024-11-17 18:55:58.458772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.980 [2024-11-17 18:55:58.458784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.980 [2024-11-17 18:55:58.471108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.980 [2024-11-17 18:55:58.471491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.980 [2024-11-17 18:55:58.471519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.980 [2024-11-17 18:55:58.471536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.980 [2024-11-17 18:55:58.471775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.980 [2024-11-17 18:55:58.472003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.980 [2024-11-17 18:55:58.472038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.980 [2024-11-17 18:55:58.472051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.980 [2024-11-17 18:55:58.472063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.980 [2024-11-17 18:55:58.484367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.980 [2024-11-17 18:55:58.484822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.980 [2024-11-17 18:55:58.484850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.980 [2024-11-17 18:55:58.484866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.980 [2024-11-17 18:55:58.485096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.980 [2024-11-17 18:55:58.485309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.980 [2024-11-17 18:55:58.485327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.980 [2024-11-17 18:55:58.485340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.980 [2024-11-17 18:55:58.485352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.980 [2024-11-17 18:55:58.497801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.980 [2024-11-17 18:55:58.498261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.980 [2024-11-17 18:55:58.498290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.980 [2024-11-17 18:55:58.498307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.980 [2024-11-17 18:55:58.498549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.980 [2024-11-17 18:55:58.498776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.980 [2024-11-17 18:55:58.498797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.980 [2024-11-17 18:55:58.498811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.980 [2024-11-17 18:55:58.498823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.980 [2024-11-17 18:55:58.511149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.980 [2024-11-17 18:55:58.511504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.980 [2024-11-17 18:55:58.511533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.980 [2024-11-17 18:55:58.511549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.980 [2024-11-17 18:55:58.511810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.980 [2024-11-17 18:55:58.512063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.980 [2024-11-17 18:55:58.512086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.980 [2024-11-17 18:55:58.512106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.980 [2024-11-17 18:55:58.512120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.980 [2024-11-17 18:55:58.524468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.980 [2024-11-17 18:55:58.524909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.980 [2024-11-17 18:55:58.524939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.980 [2024-11-17 18:55:58.524956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.980 [2024-11-17 18:55:58.525185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.980 [2024-11-17 18:55:58.525399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.980 [2024-11-17 18:55:58.525418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.980 [2024-11-17 18:55:58.525431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.980 [2024-11-17 18:55:58.525443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.980 [2024-11-17 18:55:58.537822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.980 [2024-11-17 18:55:58.538293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.980 [2024-11-17 18:55:58.538322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.980 [2024-11-17 18:55:58.538338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.980 [2024-11-17 18:55:58.538581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.980 [2024-11-17 18:55:58.538809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.980 [2024-11-17 18:55:58.538829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.980 [2024-11-17 18:55:58.538844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.980 [2024-11-17 18:55:58.538857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:11.980 [2024-11-17 18:55:58.551443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:11.980 [2024-11-17 18:55:58.551834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:11.980 [2024-11-17 18:55:58.551862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:11.980 [2024-11-17 18:55:58.551879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:11.980 [2024-11-17 18:55:58.552108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:11.980 [2024-11-17 18:55:58.552323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:11.980 [2024-11-17 18:55:58.552356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:11.980 [2024-11-17 18:55:58.552369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:11.980 [2024-11-17 18:55:58.552382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.240 [2024-11-17 18:55:58.564774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.240 [2024-11-17 18:55:58.565232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.240 [2024-11-17 18:55:58.565260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.240 [2024-11-17 18:55:58.565276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.240 [2024-11-17 18:55:58.565519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.240 [2024-11-17 18:55:58.565746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.240 [2024-11-17 18:55:58.565766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.240 [2024-11-17 18:55:58.565779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.240 [2024-11-17 18:55:58.565792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.240 [2024-11-17 18:55:58.578121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.240 [2024-11-17 18:55:58.578433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.240 [2024-11-17 18:55:58.578474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.240 [2024-11-17 18:55:58.578490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.240 [2024-11-17 18:55:58.578746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.240 [2024-11-17 18:55:58.578982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.240 [2024-11-17 18:55:58.579001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.240 [2024-11-17 18:55:58.579014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.240 [2024-11-17 18:55:58.579026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.240 [2024-11-17 18:55:58.591324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.240 [2024-11-17 18:55:58.591697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.240 [2024-11-17 18:55:58.591726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.240 [2024-11-17 18:55:58.591742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.240 [2024-11-17 18:55:58.591984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.240 [2024-11-17 18:55:58.592182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.240 [2024-11-17 18:55:58.592201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.240 [2024-11-17 18:55:58.592214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.240 [2024-11-17 18:55:58.592226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.240 [2024-11-17 18:55:58.604535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.240 [2024-11-17 18:55:58.604920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.240 [2024-11-17 18:55:58.604964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.240 [2024-11-17 18:55:58.604985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.240 [2024-11-17 18:55:58.605241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.240 [2024-11-17 18:55:58.605439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.240 [2024-11-17 18:55:58.605457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.240 [2024-11-17 18:55:58.605470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.240 [2024-11-17 18:55:58.605482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.240 [2024-11-17 18:55:58.617813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.240 [2024-11-17 18:55:58.618222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.240 [2024-11-17 18:55:58.618251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.240 [2024-11-17 18:55:58.618267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.240 [2024-11-17 18:55:58.618508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.240 [2024-11-17 18:55:58.618734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.240 [2024-11-17 18:55:58.618754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.240 [2024-11-17 18:55:58.618767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.240 [2024-11-17 18:55:58.618779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.240 [2024-11-17 18:55:58.631089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.240 [2024-11-17 18:55:58.631587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.240 [2024-11-17 18:55:58.631630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.240 [2024-11-17 18:55:58.631647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.240 [2024-11-17 18:55:58.631898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.240 [2024-11-17 18:55:58.632113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.240 [2024-11-17 18:55:58.632132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.240 [2024-11-17 18:55:58.632145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.240 [2024-11-17 18:55:58.632157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.240 [2024-11-17 18:55:58.644328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.240 [2024-11-17 18:55:58.644699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.240 [2024-11-17 18:55:58.644742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.240 [2024-11-17 18:55:58.644758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.240 [2024-11-17 18:55:58.645012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.240 [2024-11-17 18:55:58.645215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.240 [2024-11-17 18:55:58.645234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.240 [2024-11-17 18:55:58.645247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.240 [2024-11-17 18:55:58.645259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.240 [2024-11-17 18:55:58.657552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.240 [2024-11-17 18:55:58.657912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.240 [2024-11-17 18:55:58.657941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.240 [2024-11-17 18:55:58.657957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.240 [2024-11-17 18:55:58.658187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.240 [2024-11-17 18:55:58.658402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.240 [2024-11-17 18:55:58.658421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.240 [2024-11-17 18:55:58.658434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.240 [2024-11-17 18:55:58.658446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.240 [2024-11-17 18:55:58.670723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.240 [2024-11-17 18:55:58.671116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.240 [2024-11-17 18:55:58.671159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.240 [2024-11-17 18:55:58.671176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.240 [2024-11-17 18:55:58.671423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.241 [2024-11-17 18:55:58.671622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.241 [2024-11-17 18:55:58.671641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.241 [2024-11-17 18:55:58.671653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.241 [2024-11-17 18:55:58.671665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.241 [2024-11-17 18:55:58.683971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.241 [2024-11-17 18:55:58.684424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.241 [2024-11-17 18:55:58.684466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.241 [2024-11-17 18:55:58.684482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.241 [2024-11-17 18:55:58.684751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.241 [2024-11-17 18:55:58.684956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.241 [2024-11-17 18:55:58.684976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.241 [2024-11-17 18:55:58.685008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.241 [2024-11-17 18:55:58.685021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.241 [2024-11-17 18:55:58.697290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.241 [2024-11-17 18:55:58.697684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.241 [2024-11-17 18:55:58.697713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.241 [2024-11-17 18:55:58.697729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.241 [2024-11-17 18:55:58.697972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.241 [2024-11-17 18:55:58.698187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.241 [2024-11-17 18:55:58.698206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.241 [2024-11-17 18:55:58.698219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.241 [2024-11-17 18:55:58.698231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.241 [2024-11-17 18:55:58.710540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.241 [2024-11-17 18:55:58.710942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.241 [2024-11-17 18:55:58.710971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.241 [2024-11-17 18:55:58.710987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.241 [2024-11-17 18:55:58.711228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.241 [2024-11-17 18:55:58.711426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.241 [2024-11-17 18:55:58.711445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.241 [2024-11-17 18:55:58.711458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.241 [2024-11-17 18:55:58.711470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.241 [2024-11-17 18:55:58.723797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.241 [2024-11-17 18:55:58.724193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.241 [2024-11-17 18:55:58.724221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.241 [2024-11-17 18:55:58.724238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.241 [2024-11-17 18:55:58.724480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.241 [2024-11-17 18:55:58.724720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.241 [2024-11-17 18:55:58.724740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.241 [2024-11-17 18:55:58.724754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.241 [2024-11-17 18:55:58.724766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.241 [2024-11-17 18:55:58.737023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.241 [2024-11-17 18:55:58.737395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.241 [2024-11-17 18:55:58.737439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.241 [2024-11-17 18:55:58.737456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.241 [2024-11-17 18:55:58.737706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.241 [2024-11-17 18:55:58.737918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.241 [2024-11-17 18:55:58.737938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.241 [2024-11-17 18:55:58.737952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.241 [2024-11-17 18:55:58.737980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.241 [2024-11-17 18:55:58.750250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.241 [2024-11-17 18:55:58.750561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.241 [2024-11-17 18:55:58.750588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.241 [2024-11-17 18:55:58.750603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.241 [2024-11-17 18:55:58.750848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.241 [2024-11-17 18:55:58.751068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.241 [2024-11-17 18:55:58.751087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.241 [2024-11-17 18:55:58.751099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.241 [2024-11-17 18:55:58.751112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.241 [2024-11-17 18:55:58.763577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.241 [2024-11-17 18:55:58.763968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.241 [2024-11-17 18:55:58.763999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.241 [2024-11-17 18:55:58.764015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.241 [2024-11-17 18:55:58.764255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.241 [2024-11-17 18:55:58.764515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.241 [2024-11-17 18:55:58.764543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.241 [2024-11-17 18:55:58.764559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.241 [2024-11-17 18:55:58.764573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.241 [2024-11-17 18:55:58.776925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:12.241 [2024-11-17 18:55:58.777324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:12.241 [2024-11-17 18:55:58.777352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:12.241 [2024-11-17 18:55:58.777375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:12.241 [2024-11-17 18:55:58.777618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:12.241 [2024-11-17 18:55:58.777828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:12.241 [2024-11-17 18:55:58.777848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:12.241 [2024-11-17 18:55:58.777861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:12.242 [2024-11-17 18:55:58.777873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:12.242 [2024-11-17 18:55:58.790192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.242 [2024-11-17 18:55:58.790567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.242 [2024-11-17 18:55:58.790595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.242 [2024-11-17 18:55:58.790611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.242 [2024-11-17 18:55:58.790865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.242 [2024-11-17 18:55:58.791103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.242 [2024-11-17 18:55:58.791122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.242 [2024-11-17 18:55:58.791134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.242 [2024-11-17 18:55:58.791146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.242 [2024-11-17 18:55:58.803425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.242 [2024-11-17 18:55:58.803801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.242 [2024-11-17 18:55:58.803843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.242 [2024-11-17 18:55:58.803859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.242 [2024-11-17 18:55:58.804095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.242 [2024-11-17 18:55:58.804309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.242 [2024-11-17 18:55:58.804328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.242 [2024-11-17 18:55:58.804341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.242 [2024-11-17 18:55:58.804353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.501 [2024-11-17 18:55:58.816924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.501 [2024-11-17 18:55:58.817317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.501 [2024-11-17 18:55:58.817345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.501 [2024-11-17 18:55:58.817362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.501 [2024-11-17 18:55:58.817605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.501 [2024-11-17 18:55:58.817831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.501 [2024-11-17 18:55:58.817862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.501 [2024-11-17 18:55:58.817875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.501 [2024-11-17 18:55:58.817888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.501 [2024-11-17 18:55:58.830126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.501 [2024-11-17 18:55:58.830518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.501 [2024-11-17 18:55:58.830547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.501 [2024-11-17 18:55:58.830578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.501 [2024-11-17 18:55:58.830819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.501 [2024-11-17 18:55:58.831053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.501 [2024-11-17 18:55:58.831072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.501 [2024-11-17 18:55:58.831085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.501 [2024-11-17 18:55:58.831097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.501 [2024-11-17 18:55:58.843405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.502 [2024-11-17 18:55:58.843787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.502 [2024-11-17 18:55:58.843816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.502 [2024-11-17 18:55:58.843832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.502 [2024-11-17 18:55:58.844061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.502 [2024-11-17 18:55:58.844275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.502 [2024-11-17 18:55:58.844294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.502 [2024-11-17 18:55:58.844307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.502 [2024-11-17 18:55:58.844319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.502 [2024-11-17 18:55:58.856624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.502 [2024-11-17 18:55:58.856968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.502 [2024-11-17 18:55:58.856996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.502 [2024-11-17 18:55:58.857013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.502 [2024-11-17 18:55:58.857234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.502 [2024-11-17 18:55:58.857450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.502 [2024-11-17 18:55:58.857469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.502 [2024-11-17 18:55:58.857487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.502 [2024-11-17 18:55:58.857500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.502 [2024-11-17 18:55:58.869789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.502 [2024-11-17 18:55:58.870169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.502 [2024-11-17 18:55:58.870198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.502 [2024-11-17 18:55:58.870215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.502 [2024-11-17 18:55:58.870444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.502 [2024-11-17 18:55:58.870665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.502 [2024-11-17 18:55:58.870694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.502 [2024-11-17 18:55:58.870708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.502 [2024-11-17 18:55:58.870721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.502 [2024-11-17 18:55:58.883040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.502 [2024-11-17 18:55:58.883412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.502 [2024-11-17 18:55:58.883440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.502 [2024-11-17 18:55:58.883456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.502 [2024-11-17 18:55:58.883704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.502 [2024-11-17 18:55:58.883928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.502 [2024-11-17 18:55:58.883948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.502 [2024-11-17 18:55:58.883960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.502 [2024-11-17 18:55:58.883973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.502 [2024-11-17 18:55:58.896352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.502 [2024-11-17 18:55:58.896664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.502 [2024-11-17 18:55:58.896714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.502 [2024-11-17 18:55:58.896732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.502 [2024-11-17 18:55:58.896961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.502 [2024-11-17 18:55:58.897177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.502 [2024-11-17 18:55:58.897196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.502 [2024-11-17 18:55:58.897209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.502 [2024-11-17 18:55:58.897221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.502 [2024-11-17 18:55:58.909530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.502 [2024-11-17 18:55:58.909992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.502 [2024-11-17 18:55:58.910019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.502 [2024-11-17 18:55:58.910049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.502 [2024-11-17 18:55:58.910293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.502 [2024-11-17 18:55:58.910506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.502 [2024-11-17 18:55:58.910525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.502 [2024-11-17 18:55:58.910538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.502 [2024-11-17 18:55:58.910550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.502 [2024-11-17 18:55:58.922858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.502 [2024-11-17 18:55:58.923322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.502 [2024-11-17 18:55:58.923351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.502 [2024-11-17 18:55:58.923367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.502 [2024-11-17 18:55:58.923609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.502 [2024-11-17 18:55:58.923837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.502 [2024-11-17 18:55:58.923858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.502 [2024-11-17 18:55:58.923871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.502 [2024-11-17 18:55:58.923884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.502 [2024-11-17 18:55:58.936211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.502 [2024-11-17 18:55:58.936649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.502 [2024-11-17 18:55:58.936684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.502 [2024-11-17 18:55:58.936703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.502 [2024-11-17 18:55:58.936933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.502 [2024-11-17 18:55:58.937150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.502 [2024-11-17 18:55:58.937169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.502 [2024-11-17 18:55:58.937181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.502 [2024-11-17 18:55:58.937193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.502 [2024-11-17 18:55:58.949487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.502 [2024-11-17 18:55:58.949908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.502 [2024-11-17 18:55:58.949936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.502 [2024-11-17 18:55:58.949958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.502 [2024-11-17 18:55:58.950188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.502 [2024-11-17 18:55:58.950402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.502 [2024-11-17 18:55:58.950421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.502 [2024-11-17 18:55:58.950434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.502 [2024-11-17 18:55:58.950446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.502 [2024-11-17 18:55:58.962681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.502 [2024-11-17 18:55:58.963018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.502 [2024-11-17 18:55:58.963045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.502 [2024-11-17 18:55:58.963061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.502 [2024-11-17 18:55:58.963282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.502 [2024-11-17 18:55:58.963497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.503 [2024-11-17 18:55:58.963516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.503 [2024-11-17 18:55:58.963529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.503 [2024-11-17 18:55:58.963541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.503 [2024-11-17 18:55:58.975862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.503 [2024-11-17 18:55:58.976314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.503 [2024-11-17 18:55:58.976342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.503 [2024-11-17 18:55:58.976358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.503 [2024-11-17 18:55:58.976600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.503 [2024-11-17 18:55:58.976828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.503 [2024-11-17 18:55:58.976848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.503 [2024-11-17 18:55:58.976861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.503 [2024-11-17 18:55:58.976874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.503 [2024-11-17 18:55:58.989184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.503 [2024-11-17 18:55:58.989516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.503 [2024-11-17 18:55:58.989542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.503 [2024-11-17 18:55:58.989557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.503 [2024-11-17 18:55:58.989803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.503 [2024-11-17 18:55:58.990027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.503 [2024-11-17 18:55:58.990047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.503 [2024-11-17 18:55:58.990059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.503 [2024-11-17 18:55:58.990071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.503 [2024-11-17 18:55:59.002361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.503 [2024-11-17 18:55:59.002761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.503 [2024-11-17 18:55:59.002790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.503 [2024-11-17 18:55:59.002806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.503 [2024-11-17 18:55:59.003039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.503 [2024-11-17 18:55:59.003255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.503 [2024-11-17 18:55:59.003273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.503 [2024-11-17 18:55:59.003286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.503 [2024-11-17 18:55:59.003298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.503 [2024-11-17 18:55:59.015672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.503 [2024-11-17 18:55:59.016117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.503 [2024-11-17 18:55:59.016147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.503 [2024-11-17 18:55:59.016163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.503 [2024-11-17 18:55:59.016395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.503 [2024-11-17 18:55:59.016628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.503 [2024-11-17 18:55:59.016651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.503 [2024-11-17 18:55:59.016665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.503 [2024-11-17 18:55:59.016691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.503 [2024-11-17 18:55:59.029155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.503 [2024-11-17 18:55:59.029570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.503 [2024-11-17 18:55:59.029614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.503 [2024-11-17 18:55:59.029631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.503 [2024-11-17 18:55:59.029876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.503 [2024-11-17 18:55:59.030112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.503 [2024-11-17 18:55:59.030131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.503 [2024-11-17 18:55:59.030149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.503 [2024-11-17 18:55:59.030162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.503 [2024-11-17 18:55:59.042338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.503 [2024-11-17 18:55:59.042718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.503 [2024-11-17 18:55:59.042748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.503 [2024-11-17 18:55:59.042765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.503 [2024-11-17 18:55:59.042994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.503 [2024-11-17 18:55:59.043209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.503 [2024-11-17 18:55:59.043229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.503 [2024-11-17 18:55:59.043241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.503 [2024-11-17 18:55:59.043254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.503 [2024-11-17 18:55:59.055554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.503 [2024-11-17 18:55:59.056016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.503 [2024-11-17 18:55:59.056058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.503 [2024-11-17 18:55:59.056075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.503 [2024-11-17 18:55:59.056317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.503 [2024-11-17 18:55:59.056532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.503 [2024-11-17 18:55:59.056550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.503 [2024-11-17 18:55:59.056563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.503 [2024-11-17 18:55:59.056575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.503 [2024-11-17 18:55:59.068897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.503 [2024-11-17 18:55:59.069288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.503 [2024-11-17 18:55:59.069331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.503 [2024-11-17 18:55:59.069347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.503 [2024-11-17 18:55:59.069600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.503 [2024-11-17 18:55:59.069841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.503 [2024-11-17 18:55:59.069862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.503 [2024-11-17 18:55:59.069876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.503 [2024-11-17 18:55:59.069889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.761 [2024-11-17 18:55:59.082507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.761 [2024-11-17 18:55:59.082846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.761 [2024-11-17 18:55:59.082890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.761 [2024-11-17 18:55:59.082906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.761 [2024-11-17 18:55:59.083136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.761 [2024-11-17 18:55:59.083350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.761 [2024-11-17 18:55:59.083369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.761 [2024-11-17 18:55:59.083382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.761 [2024-11-17 18:55:59.083394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.761 [2024-11-17 18:55:59.095641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.761 [2024-11-17 18:55:59.096082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.761 [2024-11-17 18:55:59.096110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.761 [2024-11-17 18:55:59.096126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.761 [2024-11-17 18:55:59.096354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.761 [2024-11-17 18:55:59.096568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.761 [2024-11-17 18:55:59.096587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.761 [2024-11-17 18:55:59.096600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.761 [2024-11-17 18:55:59.096612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.761 [2024-11-17 18:55:59.108835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.761 [2024-11-17 18:55:59.109226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.762 [2024-11-17 18:55:59.109268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.762 [2024-11-17 18:55:59.109285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.762 [2024-11-17 18:55:59.109538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.762 [2024-11-17 18:55:59.109766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.762 [2024-11-17 18:55:59.109787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.762 [2024-11-17 18:55:59.109800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.762 [2024-11-17 18:55:59.109812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.762 [2024-11-17 18:55:59.122122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.762 [2024-11-17 18:55:59.122449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.762 [2024-11-17 18:55:59.122476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.762 [2024-11-17 18:55:59.122497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.762 [2024-11-17 18:55:59.122751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.762 [2024-11-17 18:55:59.122970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.762 [2024-11-17 18:55:59.122989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.762 [2024-11-17 18:55:59.123001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.762 [2024-11-17 18:55:59.123013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.762 [2024-11-17 18:55:59.135329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.762 [2024-11-17 18:55:59.135716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.762 [2024-11-17 18:55:59.135746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.762 [2024-11-17 18:55:59.135762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.762 [2024-11-17 18:55:59.136005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.762 [2024-11-17 18:55:59.136203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.762 [2024-11-17 18:55:59.136222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.762 [2024-11-17 18:55:59.136235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.762 [2024-11-17 18:55:59.136247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.762 [2024-11-17 18:55:59.148537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.762 [2024-11-17 18:55:59.148941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.762 [2024-11-17 18:55:59.148970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.762 [2024-11-17 18:55:59.148986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.762 [2024-11-17 18:55:59.149227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.762 [2024-11-17 18:55:59.149440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.762 [2024-11-17 18:55:59.149460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.762 [2024-11-17 18:55:59.149473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.762 [2024-11-17 18:55:59.149484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.762 [2024-11-17 18:55:59.161913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.762 [2024-11-17 18:55:59.162364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.762 [2024-11-17 18:55:59.162406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.762 [2024-11-17 18:55:59.162423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.762 [2024-11-17 18:55:59.162650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.762 [2024-11-17 18:55:59.162892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.762 [2024-11-17 18:55:59.162913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.762 [2024-11-17 18:55:59.162927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.762 [2024-11-17 18:55:59.162939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.762 [2024-11-17 18:55:59.175179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.762 [2024-11-17 18:55:59.175582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.762 [2024-11-17 18:55:59.175609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.762 [2024-11-17 18:55:59.175624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.762 [2024-11-17 18:55:59.175895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.762 [2024-11-17 18:55:59.176130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.762 [2024-11-17 18:55:59.176149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.762 [2024-11-17 18:55:59.176161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.762 [2024-11-17 18:55:59.176173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.762 [2024-11-17 18:55:59.188473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.762 [2024-11-17 18:55:59.188810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.762 [2024-11-17 18:55:59.188853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.762 [2024-11-17 18:55:59.188869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.762 [2024-11-17 18:55:59.189110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.762 [2024-11-17 18:55:59.189323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.762 [2024-11-17 18:55:59.189341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.762 [2024-11-17 18:55:59.189354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.762 [2024-11-17 18:55:59.189365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.762 [2024-11-17 18:55:59.201699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.762 [2024-11-17 18:55:59.202070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.762 [2024-11-17 18:55:59.202112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.762 [2024-11-17 18:55:59.202129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.762 [2024-11-17 18:55:59.202369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.762 [2024-11-17 18:55:59.202584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.762 [2024-11-17 18:55:59.202603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.762 [2024-11-17 18:55:59.202620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.762 [2024-11-17 18:55:59.202633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.762 [2024-11-17 18:55:59.214949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.762 [2024-11-17 18:55:59.215312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.762 [2024-11-17 18:55:59.215340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.762 [2024-11-17 18:55:59.215356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.762 [2024-11-17 18:55:59.215585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.762 [2024-11-17 18:55:59.215828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.762 [2024-11-17 18:55:59.215849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.762 [2024-11-17 18:55:59.215862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.762 [2024-11-17 18:55:59.215874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.762 [2024-11-17 18:55:59.228183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.762 [2024-11-17 18:55:59.228554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.762 [2024-11-17 18:55:59.228582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.762 [2024-11-17 18:55:59.228598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.762 [2024-11-17 18:55:59.228839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.762 [2024-11-17 18:55:59.229061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.762 [2024-11-17 18:55:59.229080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.762 [2024-11-17 18:55:59.229092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.762 [2024-11-17 18:55:59.229104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.762 [2024-11-17 18:55:59.241440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.762 [2024-11-17 18:55:59.241786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.762 [2024-11-17 18:55:59.241814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.762 [2024-11-17 18:55:59.241829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.762 [2024-11-17 18:55:59.242053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.762 [2024-11-17 18:55:59.242267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.762 [2024-11-17 18:55:59.242286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.762 [2024-11-17 18:55:59.242299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.762 [2024-11-17 18:55:59.242311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.762 [2024-11-17 18:55:59.254797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.762 [2024-11-17 18:55:59.255189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.762 [2024-11-17 18:55:59.255231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.762 [2024-11-17 18:55:59.255247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.762 [2024-11-17 18:55:59.255501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.762 [2024-11-17 18:55:59.255728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.762 [2024-11-17 18:55:59.255748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.762 [2024-11-17 18:55:59.255761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.762 [2024-11-17 18:55:59.255774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.762 [2024-11-17 18:55:59.268088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.762 [2024-11-17 18:55:59.268459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.762 [2024-11-17 18:55:59.268489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.762 [2024-11-17 18:55:59.268505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.762 [2024-11-17 18:55:59.268751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.762 [2024-11-17 18:55:59.268990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.762 [2024-11-17 18:55:59.269013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.762 [2024-11-17 18:55:59.269028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.762 [2024-11-17 18:55:59.269041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.762 [2024-11-17 18:55:59.281318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.762 [2024-11-17 18:55:59.281840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.762 [2024-11-17 18:55:59.281871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.762 [2024-11-17 18:55:59.281888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.762 [2024-11-17 18:55:59.282103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.762 [2024-11-17 18:55:59.282317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.763 [2024-11-17 18:55:59.282336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.763 [2024-11-17 18:55:59.282349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.763 [2024-11-17 18:55:59.282361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.763 [2024-11-17 18:55:59.294588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.763 [2024-11-17 18:55:59.294993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.763 [2024-11-17 18:55:59.295036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.763 [2024-11-17 18:55:59.295058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.763 [2024-11-17 18:55:59.295293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.763 [2024-11-17 18:55:59.295491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.763 [2024-11-17 18:55:59.295510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.763 [2024-11-17 18:55:59.295522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.763 [2024-11-17 18:55:59.295534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.763 [2024-11-17 18:55:59.307879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.763 [2024-11-17 18:55:59.308290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.763 [2024-11-17 18:55:59.308317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.763 [2024-11-17 18:55:59.308348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.763 [2024-11-17 18:55:59.308577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.763 [2024-11-17 18:55:59.308819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.763 [2024-11-17 18:55:59.308839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.763 [2024-11-17 18:55:59.308852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.763 [2024-11-17 18:55:59.308865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.763 [2024-11-17 18:55:59.321184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:12.763 [2024-11-17 18:55:59.321621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.763 [2024-11-17 18:55:59.321649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:12.763 [2024-11-17 18:55:59.321666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:12.763 [2024-11-17 18:55:59.321905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:12.763 [2024-11-17 18:55:59.322121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:12.763 [2024-11-17 18:55:59.322141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:12.763 [2024-11-17 18:55:59.322153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:12.763 [2024-11-17 18:55:59.322165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:12.763 4607.00 IOPS, 18.00 MiB/s [2024-11-17T17:55:59.339Z] [2024-11-17 18:55:59.336463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.022 [2024-11-17 18:55:59.336833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.022 [2024-11-17 18:55:59.336862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.022 [2024-11-17 18:55:59.336879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.022 [2024-11-17 18:55:59.337107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.022 [2024-11-17 18:55:59.337371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.022 [2024-11-17 18:55:59.337391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.022 [2024-11-17 18:55:59.337404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.022 [2024-11-17 18:55:59.337417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.022 [2024-11-17 18:55:59.349707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.022 [2024-11-17 18:55:59.350022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.022 [2024-11-17 18:55:59.350064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.022 [2024-11-17 18:55:59.350080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.022 [2024-11-17 18:55:59.350282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.022 [2024-11-17 18:55:59.350491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.022 [2024-11-17 18:55:59.350510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.022 [2024-11-17 18:55:59.350522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.022 [2024-11-17 18:55:59.350534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.022 [2024-11-17 18:55:59.362805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.022 [2024-11-17 18:55:59.363302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.022 [2024-11-17 18:55:59.363344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.022 [2024-11-17 18:55:59.363362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.022 [2024-11-17 18:55:59.363611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.022 [2024-11-17 18:55:59.363849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.022 [2024-11-17 18:55:59.363869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.022 [2024-11-17 18:55:59.363882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.022 [2024-11-17 18:55:59.363895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.022 [2024-11-17 18:55:59.376022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.022 [2024-11-17 18:55:59.376383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.022 [2024-11-17 18:55:59.376423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.022 [2024-11-17 18:55:59.376438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.022 [2024-11-17 18:55:59.376667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.022 [2024-11-17 18:55:59.376874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.022 [2024-11-17 18:55:59.376893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.022 [2024-11-17 18:55:59.376913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.022 [2024-11-17 18:55:59.376926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.022-00:35:13.284 [2024-11-17 18:55:59.388984 - 18:55:59.732144] [... the same nine-record sequence (resetting controller -> connect() failed, errno = 111 -> sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 -> ... -> Resetting controller failed.) repeats 27 more times at roughly 13 ms intervals ...]
00:35:13.285 [2024-11-17 18:55:59.744399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.285 [2024-11-17 18:55:59.744797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.285 [2024-11-17 18:55:59.744825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.285 [2024-11-17 18:55:59.744841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.285 [2024-11-17 18:55:59.745064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.285 [2024-11-17 18:55:59.745272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.285 [2024-11-17 18:55:59.745290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.285 [2024-11-17 18:55:59.745302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.285 [2024-11-17 18:55:59.745314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.285 [2024-11-17 18:55:59.757586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.285 [2024-11-17 18:55:59.758036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.285 [2024-11-17 18:55:59.758079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.285 [2024-11-17 18:55:59.758096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.285 [2024-11-17 18:55:59.758335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.285 [2024-11-17 18:55:59.758527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.285 [2024-11-17 18:55:59.758545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.285 [2024-11-17 18:55:59.758558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.285 [2024-11-17 18:55:59.758569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.285 [2024-11-17 18:55:59.770624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.285 [2024-11-17 18:55:59.771033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.285 [2024-11-17 18:55:59.771063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.285 [2024-11-17 18:55:59.771079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.285 [2024-11-17 18:55:59.771331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.285 [2024-11-17 18:55:59.771568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.285 [2024-11-17 18:55:59.771606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.285 [2024-11-17 18:55:59.771621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.285 [2024-11-17 18:55:59.771634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.285 [2024-11-17 18:55:59.783917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.285 [2024-11-17 18:55:59.784286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.285 [2024-11-17 18:55:59.784315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.285 [2024-11-17 18:55:59.784331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.285 [2024-11-17 18:55:59.784566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.285 [2024-11-17 18:55:59.784792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.285 [2024-11-17 18:55:59.784813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.285 [2024-11-17 18:55:59.784826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.285 [2024-11-17 18:55:59.784838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.285 [2024-11-17 18:55:59.797145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.285 [2024-11-17 18:55:59.797572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.285 [2024-11-17 18:55:59.797600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.285 [2024-11-17 18:55:59.797633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.285 [2024-11-17 18:55:59.797882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.285 [2024-11-17 18:55:59.798112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.285 [2024-11-17 18:55:59.798131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.285 [2024-11-17 18:55:59.798143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.285 [2024-11-17 18:55:59.798155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.285 [2024-11-17 18:55:59.810119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.285 [2024-11-17 18:55:59.810483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.285 [2024-11-17 18:55:59.810525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.285 [2024-11-17 18:55:59.810545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.285 [2024-11-17 18:55:59.810789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.285 [2024-11-17 18:55:59.811006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.285 [2024-11-17 18:55:59.811024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.285 [2024-11-17 18:55:59.811037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.285 [2024-11-17 18:55:59.811048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.285 [2024-11-17 18:55:59.823354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.285 [2024-11-17 18:55:59.823721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.285 [2024-11-17 18:55:59.823765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.285 [2024-11-17 18:55:59.823780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.285 [2024-11-17 18:55:59.824034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.285 [2024-11-17 18:55:59.824227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.285 [2024-11-17 18:55:59.824245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.285 [2024-11-17 18:55:59.824257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.285 [2024-11-17 18:55:59.824268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.285 [2024-11-17 18:55:59.836420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.285 [2024-11-17 18:55:59.836847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.285 [2024-11-17 18:55:59.836890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.285 [2024-11-17 18:55:59.836907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.285 [2024-11-17 18:55:59.837155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.285 [2024-11-17 18:55:59.837363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.285 [2024-11-17 18:55:59.837382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.285 [2024-11-17 18:55:59.837394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.285 [2024-11-17 18:55:59.837405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.285 [2024-11-17 18:55:59.849514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.285 [2024-11-17 18:55:59.849968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.285 [2024-11-17 18:55:59.850011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.285 [2024-11-17 18:55:59.850027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.285 [2024-11-17 18:55:59.850294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.285 [2024-11-17 18:55:59.850492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.285 [2024-11-17 18:55:59.850510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.285 [2024-11-17 18:55:59.850523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.285 [2024-11-17 18:55:59.850534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.545 [2024-11-17 18:55:59.862710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.545 [2024-11-17 18:55:59.863156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.545 [2024-11-17 18:55:59.863183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.545 [2024-11-17 18:55:59.863199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.545 [2024-11-17 18:55:59.863426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.545 [2024-11-17 18:55:59.863644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.545 [2024-11-17 18:55:59.863665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.545 [2024-11-17 18:55:59.863706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.545 [2024-11-17 18:55:59.863721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.545 [2024-11-17 18:55:59.875823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.545 [2024-11-17 18:55:59.876212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.545 [2024-11-17 18:55:59.876255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.545 [2024-11-17 18:55:59.876271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.545 [2024-11-17 18:55:59.876539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.545 [2024-11-17 18:55:59.876761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.545 [2024-11-17 18:55:59.876781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.545 [2024-11-17 18:55:59.876794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.545 [2024-11-17 18:55:59.876806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.545 [2024-11-17 18:55:59.889209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.545 [2024-11-17 18:55:59.889512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.545 [2024-11-17 18:55:59.889554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.545 [2024-11-17 18:55:59.889569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.545 [2024-11-17 18:55:59.889829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.545 [2024-11-17 18:55:59.890076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.545 [2024-11-17 18:55:59.890095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.545 [2024-11-17 18:55:59.890112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.545 [2024-11-17 18:55:59.890124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.545 [2024-11-17 18:55:59.902403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.545 [2024-11-17 18:55:59.902772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.545 [2024-11-17 18:55:59.902815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.545 [2024-11-17 18:55:59.902831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.545 [2024-11-17 18:55:59.903082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.545 [2024-11-17 18:55:59.903289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.545 [2024-11-17 18:55:59.903309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.545 [2024-11-17 18:55:59.903321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.545 [2024-11-17 18:55:59.903333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.545 [2024-11-17 18:55:59.915597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.545 [2024-11-17 18:55:59.915986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.545 [2024-11-17 18:55:59.916028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.545 [2024-11-17 18:55:59.916043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.545 [2024-11-17 18:55:59.916286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.545 [2024-11-17 18:55:59.916477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.545 [2024-11-17 18:55:59.916495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.545 [2024-11-17 18:55:59.916508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.545 [2024-11-17 18:55:59.916520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.545 [2024-11-17 18:55:59.928813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.545 [2024-11-17 18:55:59.929160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.545 [2024-11-17 18:55:59.929188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.545 [2024-11-17 18:55:59.929204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.545 [2024-11-17 18:55:59.929424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.546 [2024-11-17 18:55:59.929632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.546 [2024-11-17 18:55:59.929665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.546 [2024-11-17 18:55:59.929694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.546 [2024-11-17 18:55:59.929709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.546 [2024-11-17 18:55:59.941975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.546 [2024-11-17 18:55:59.942356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.546 [2024-11-17 18:55:59.942385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.546 [2024-11-17 18:55:59.942401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.546 [2024-11-17 18:55:59.942642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.546 [2024-11-17 18:55:59.942872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.546 [2024-11-17 18:55:59.942892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.546 [2024-11-17 18:55:59.942905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.546 [2024-11-17 18:55:59.942917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.546 [2024-11-17 18:55:59.955086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.546 [2024-11-17 18:55:59.955515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.546 [2024-11-17 18:55:59.955558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.546 [2024-11-17 18:55:59.955575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.546 [2024-11-17 18:55:59.955827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.546 [2024-11-17 18:55:59.956039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.546 [2024-11-17 18:55:59.956058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.546 [2024-11-17 18:55:59.956070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.546 [2024-11-17 18:55:59.956082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.546 [2024-11-17 18:55:59.968200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.546 [2024-11-17 18:55:59.968688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.546 [2024-11-17 18:55:59.968718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.546 [2024-11-17 18:55:59.968734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.546 [2024-11-17 18:55:59.968975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.546 [2024-11-17 18:55:59.969184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.546 [2024-11-17 18:55:59.969203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.546 [2024-11-17 18:55:59.969215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.546 [2024-11-17 18:55:59.969227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
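Every pass of the retry loop above fails the same way: the TCP connect() to 10.0.0.2:4420 is refused with errno = 111 (ECONNREFUSED, there is no listener because the nvmf target process was killed), so the qpair never comes up and each reset attempt ends with bdev_nvme_reset_ctrlr_complete reporting failure. A minimal sketch reproducing that errno, assuming a Linux host; the helper name probe_connect is hypothetical and not part of SPDK:

```python
import errno
import socket

def probe_connect(host: str, port: int) -> int:
    """Attempt a TCP connect; return 0 on success or the failure errno."""
    try:
        with socket.create_connection((host, port), timeout=2):
            return 0
    except OSError as exc:
        return exc.errno

# Pick a port with no listener: bind to an ephemeral port to learn its
# number, then close the socket so nothing is accepting on it anymore.
_probe = socket.socket()
_probe.bind(("127.0.0.1", 0))
_unused_port = _probe.getsockname()[1]
_probe.close()

# With no listener, Linux refuses the connection: errno 111 (ECONNREFUSED),
# the same raw number posix_sock_create logs in the trace above.
print(probe_connect("127.0.0.1", _unused_port) == errno.ECONNREFUSED)
```

SPDK's posix sock layer logs the raw errno value rather than the symbolic name, which is why the log shows "errno = 111" instead of ECONNREFUSED.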
00:35:13.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 889373 Killed "${NVMF_APP[@]}" "$@"
00:35:13.546 18:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:35:13.546 [2024-11-17 18:55:59.981376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.546 18:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
18:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
18:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:13.546 [2024-11-17 18:55:59.981800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.546 [2024-11-17 18:55:59.981831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.546 [2024-11-17 18:55:59.981847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.546 18:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:13.546 [2024-11-17 18:55:59.982077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.546 [2024-11-17 18:55:59.982308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.546 [2024-11-17 18:55:59.982328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.546 [2024-11-17 18:55:59.982341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.546 [2024-11-17 18:55:59.982354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
18:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=890323
18:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
18:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 890323
18:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 890323 ']'
18:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
18:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
18:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
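The shell trace above restarts the target: nvmfappstart launches a fresh nvmf_tgt (pid 890323), and waitforlisten then polls until that process answers on the RPC socket /var/tmp/spdk.sock. A minimal sketch of that wait-for-listen idea, assuming a UNIX-domain socket; wait_for_unix_socket is a hypothetical stand-in for the harness helper and omits the real script's PID liveness check and its max_retries=100 cap:

```python
import os
import socket
import time

def wait_for_unix_socket(path: str, timeout: float = 5.0,
                         interval: float = 0.05) -> bool:
    """Poll until a UNIX-domain socket at `path` accepts a connection.

    Returns True once a connect succeeds, False if the deadline passes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                try:
                    s.connect(path)
                    return True  # something is accepting on the socket
                except OSError:
                    pass  # socket file exists but no listener yet
        time.sleep(interval)
    return False
```

Polling with a deadline, rather than a single blocking connect, matches the trace's behavior: the helper keeps retrying while the freshly launched target initializes, and gives up only after a bounded wait.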
00:35:13.546 18:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
18:55:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:13.546 [2024-11-17 18:55:59.994689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.546 [2024-11-17 18:55:59.995149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.546 [2024-11-17 18:55:59.995195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.546 [2024-11-17 18:55:59.995212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.546 [2024-11-17 18:55:59.995453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.546 [2024-11-17 18:55:59.995665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.546 [2024-11-17 18:55:59.995697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.546 [2024-11-17 18:55:59.995712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.546 [2024-11-17 18:55:59.995750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.546 [2024-11-17 18:56:00.008639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.546 [2024-11-17 18:56:00.009089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.546 [2024-11-17 18:56:00.009128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.546 [2024-11-17 18:56:00.009148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.546 [2024-11-17 18:56:00.009377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.546 [2024-11-17 18:56:00.009604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.546 [2024-11-17 18:56:00.009624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.546 [2024-11-17 18:56:00.009638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.546 [2024-11-17 18:56:00.009651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.546 [2024-11-17 18:56:00.022236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.546 [2024-11-17 18:56:00.022683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.546 [2024-11-17 18:56:00.022715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.546 [2024-11-17 18:56:00.022732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.546 [2024-11-17 18:56:00.022960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.546 [2024-11-17 18:56:00.023220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.546 [2024-11-17 18:56:00.023249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.546 [2024-11-17 18:56:00.023265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.546 [2024-11-17 18:56:00.023279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.546 [2024-11-17 18:56:00.035245] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:35:13.546 [2024-11-17 18:56:00.035317] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:13.546 [2024-11-17 18:56:00.035811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.546 [2024-11-17 18:56:00.036238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.546 [2024-11-17 18:56:00.036282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.546 [2024-11-17 18:56:00.036301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.546 [2024-11-17 18:56:00.036546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.547 [2024-11-17 18:56:00.036787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.547 [2024-11-17 18:56:00.036810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.547 [2024-11-17 18:56:00.036825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.547 [2024-11-17 18:56:00.036839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.547 [2024-11-17 18:56:00.049299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.547 [2024-11-17 18:56:00.049687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.547 [2024-11-17 18:56:00.049717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.547 [2024-11-17 18:56:00.049735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.547 [2024-11-17 18:56:00.049979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.547 [2024-11-17 18:56:00.050194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.547 [2024-11-17 18:56:00.050213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.547 [2024-11-17 18:56:00.050226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.547 [2024-11-17 18:56:00.050239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.547 [2024-11-17 18:56:00.062801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.547 [2024-11-17 18:56:00.063145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.547 [2024-11-17 18:56:00.063174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.547 [2024-11-17 18:56:00.063191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.547 [2024-11-17 18:56:00.063426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.547 [2024-11-17 18:56:00.063625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.547 [2024-11-17 18:56:00.063644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.547 [2024-11-17 18:56:00.063657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.547 [2024-11-17 18:56:00.063670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.547 [2024-11-17 18:56:00.076038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.547 [2024-11-17 18:56:00.076441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.547 [2024-11-17 18:56:00.076470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.547 [2024-11-17 18:56:00.076487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.547 [2024-11-17 18:56:00.076713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.547 [2024-11-17 18:56:00.076933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.547 [2024-11-17 18:56:00.076968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.547 [2024-11-17 18:56:00.076982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.547 [2024-11-17 18:56:00.076995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.547 [2024-11-17 18:56:00.089503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.547 [2024-11-17 18:56:00.089852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.547 [2024-11-17 18:56:00.089882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.547 [2024-11-17 18:56:00.089899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.547 [2024-11-17 18:56:00.090147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.547 [2024-11-17 18:56:00.090399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.547 [2024-11-17 18:56:00.090420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.547 [2024-11-17 18:56:00.090433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.547 [2024-11-17 18:56:00.090446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.547 [2024-11-17 18:56:00.103059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.547 [2024-11-17 18:56:00.103430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.547 [2024-11-17 18:56:00.103458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.547 [2024-11-17 18:56:00.103475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.547 [2024-11-17 18:56:00.103718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.547 [2024-11-17 18:56:00.103939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.547 [2024-11-17 18:56:00.103959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.547 [2024-11-17 18:56:00.103972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.547 [2024-11-17 18:56:00.103985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.547 [2024-11-17 18:56:00.113360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:13.547 [2024-11-17 18:56:00.116735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.547 [2024-11-17 18:56:00.117209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.547 [2024-11-17 18:56:00.117237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.547 [2024-11-17 18:56:00.117255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.547 [2024-11-17 18:56:00.117470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.547 [2024-11-17 18:56:00.117699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.547 [2024-11-17 18:56:00.117721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.547 [2024-11-17 18:56:00.117735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.547 [2024-11-17 18:56:00.117749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.807 [2024-11-17 18:56:00.130219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.807 [2024-11-17 18:56:00.130702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.807 [2024-11-17 18:56:00.130740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.807 [2024-11-17 18:56:00.130759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.807 [2024-11-17 18:56:00.130997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.807 [2024-11-17 18:56:00.131262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.807 [2024-11-17 18:56:00.131283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.807 [2024-11-17 18:56:00.131299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.807 [2024-11-17 18:56:00.131315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.807 [2024-11-17 18:56:00.143789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.807 [2024-11-17 18:56:00.144212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.807 [2024-11-17 18:56:00.144255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.807 [2024-11-17 18:56:00.144271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.807 [2024-11-17 18:56:00.144515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.807 [2024-11-17 18:56:00.144767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.807 [2024-11-17 18:56:00.144788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.807 [2024-11-17 18:56:00.144802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.807 [2024-11-17 18:56:00.144815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.807 [2024-11-17 18:56:00.157278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.807 [2024-11-17 18:56:00.157707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.807 [2024-11-17 18:56:00.157752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.807 [2024-11-17 18:56:00.157769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.807 [2024-11-17 18:56:00.158012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.807 [2024-11-17 18:56:00.158211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.807 [2024-11-17 18:56:00.158231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.807 [2024-11-17 18:56:00.158244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.807 [2024-11-17 18:56:00.158256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:13.807 [2024-11-17 18:56:00.161758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:13.807 [2024-11-17 18:56:00.161789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:13.807 [2024-11-17 18:56:00.161818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:13.807 [2024-11-17 18:56:00.161830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:13.807 [2024-11-17 18:56:00.161840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:13.807 [2024-11-17 18:56:00.163265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:13.807 [2024-11-17 18:56:00.163327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:13.807 [2024-11-17 18:56:00.163330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.807 [2024-11-17 18:56:00.170831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.807 [2024-11-17 18:56:00.171330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.807 [2024-11-17 18:56:00.171365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.807 [2024-11-17 18:56:00.171385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.807 [2024-11-17 18:56:00.171622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.807 [2024-11-17 18:56:00.171867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.807 [2024-11-17 18:56:00.171889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.807 [2024-11-17 18:56:00.171906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.807 [2024-11-17 18:56:00.171923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.807 [2024-11-17 18:56:00.184393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.807 [2024-11-17 18:56:00.184889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.807 [2024-11-17 18:56:00.184928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.807 [2024-11-17 18:56:00.184948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.807 [2024-11-17 18:56:00.185186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.807 [2024-11-17 18:56:00.185401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.807 [2024-11-17 18:56:00.185438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.807 [2024-11-17 18:56:00.185454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.807 [2024-11-17 18:56:00.185471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.807 [2024-11-17 18:56:00.198107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.807 [2024-11-17 18:56:00.198634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.807 [2024-11-17 18:56:00.198683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.807 [2024-11-17 18:56:00.198707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.808 [2024-11-17 18:56:00.198933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.808 [2024-11-17 18:56:00.199167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.808 [2024-11-17 18:56:00.199189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.808 [2024-11-17 18:56:00.199204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.808 [2024-11-17 18:56:00.199221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.808 [2024-11-17 18:56:00.211729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.808 [2024-11-17 18:56:00.212269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.808 [2024-11-17 18:56:00.212308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.808 [2024-11-17 18:56:00.212339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.808 [2024-11-17 18:56:00.212579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.808 [2024-11-17 18:56:00.212826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.808 [2024-11-17 18:56:00.212849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.808 [2024-11-17 18:56:00.212866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.808 [2024-11-17 18:56:00.212883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.808 [2024-11-17 18:56:00.225295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.808 [2024-11-17 18:56:00.225763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.808 [2024-11-17 18:56:00.225798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.808 [2024-11-17 18:56:00.225818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.808 [2024-11-17 18:56:00.226054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.808 [2024-11-17 18:56:00.226270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.808 [2024-11-17 18:56:00.226291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.808 [2024-11-17 18:56:00.226306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.808 [2024-11-17 18:56:00.226321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.808 [2024-11-17 18:56:00.239037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.808 [2024-11-17 18:56:00.239538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.808 [2024-11-17 18:56:00.239576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.808 [2024-11-17 18:56:00.239596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.808 [2024-11-17 18:56:00.239829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.808 [2024-11-17 18:56:00.240066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.808 [2024-11-17 18:56:00.240088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.808 [2024-11-17 18:56:00.240104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.808 [2024-11-17 18:56:00.240120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.808 [2024-11-17 18:56:00.252545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.808 [2024-11-17 18:56:00.252942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.808 [2024-11-17 18:56:00.252972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.808 [2024-11-17 18:56:00.252989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.808 [2024-11-17 18:56:00.253220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.808 [2024-11-17 18:56:00.253462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.808 [2024-11-17 18:56:00.253485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.808 [2024-11-17 18:56:00.253500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.808 [2024-11-17 18:56:00.253514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.808 [2024-11-17 18:56:00.266040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.808 [2024-11-17 18:56:00.266384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.808 [2024-11-17 18:56:00.266412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.808 [2024-11-17 18:56:00.266429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.808 [2024-11-17 18:56:00.266644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.808 [2024-11-17 18:56:00.266902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.808 [2024-11-17 18:56:00.266924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.808 [2024-11-17 18:56:00.266938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.808 [2024-11-17 18:56:00.266952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.808 [2024-11-17 18:56:00.279579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.808 [2024-11-17 18:56:00.279927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.808 [2024-11-17 18:56:00.279956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.808 [2024-11-17 18:56:00.279973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.808 [2024-11-17 18:56:00.280187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.808 [2024-11-17 18:56:00.280406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.808 [2024-11-17 18:56:00.280427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.808 [2024-11-17 18:56:00.280441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.808 [2024-11-17 18:56:00.280455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.808 [2024-11-17 18:56:00.293201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.808 [2024-11-17 18:56:00.293543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.808 [2024-11-17 18:56:00.293570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.808 [2024-11-17 18:56:00.293587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.808 [2024-11-17 18:56:00.293810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.808 [2024-11-17 18:56:00.294029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.808 [2024-11-17 18:56:00.294051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.808 [2024-11-17 18:56:00.294065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.808 [2024-11-17 18:56:00.294083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.808 [2024-11-17 18:56:00.306918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:13.808 [2024-11-17 18:56:00.307259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:13.808 [2024-11-17 18:56:00.307288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420 00:35:13.808 [2024-11-17 18:56:00.307304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set 00:35:13.808 [2024-11-17 18:56:00.307519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor 00:35:13.808 [2024-11-17 18:56:00.307747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:13.808 [2024-11-17 18:56:00.307769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:13.808 [2024-11-17 18:56:00.307784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:13.808 [2024-11-17 18:56:00.307797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:13.808 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:13.808 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:35:13.808 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:35:13.808 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:13.808 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:13.808 [2024-11-17 18:56:00.320491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.808 [2024-11-17 18:56:00.320850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.808 [2024-11-17 18:56:00.320883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.808 [2024-11-17 18:56:00.320901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.808 [2024-11-17 18:56:00.321132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.808 [2024-11-17 18:56:00.321344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.808 [2024-11-17 18:56:00.321364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.808 [2024-11-17 18:56:00.321378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.809 [2024-11-17 18:56:00.321392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.809 [2024-11-17 18:56:00.334231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.809 [2024-11-17 18:56:00.334563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.809 [2024-11-17 18:56:00.334593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.809 [2024-11-17 18:56:00.334610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.809 [2024-11-17 18:56:00.334839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.809 3839.17 IOPS, 15.00 MiB/s [2024-11-17T17:56:00.385Z] [2024-11-17 18:56:00.336595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.809 [2024-11-17 18:56:00.336619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.809 [2024-11-17 18:56:00.336648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.809 [2024-11-17 18:56:00.336661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.809 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:13.809 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:35:13.809 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:13.809 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:13.809 [2024-11-17 18:56:00.344251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:13.809 [2024-11-17 18:56:00.347819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.809 [2024-11-17 18:56:00.348208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.809 [2024-11-17 18:56:00.348237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.809 [2024-11-17 18:56:00.348254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.809 [2024-11-17 18:56:00.348469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.809 [2024-11-17 18:56:00.348723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.809 [2024-11-17 18:56:00.348745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.809 [2024-11-17 18:56:00.348759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.809 [2024-11-17 18:56:00.348772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.809 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:13.809 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:13.809 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:13.809 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:13.809 [2024-11-17 18:56:00.361412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.809 [2024-11-17 18:56:00.361842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.809 [2024-11-17 18:56:00.361878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.809 [2024-11-17 18:56:00.361897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.809 [2024-11-17 18:56:00.362119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.809 [2024-11-17 18:56:00.362354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.809 [2024-11-17 18:56:00.362374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.809 [2024-11-17 18:56:00.362391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.809 [2024-11-17 18:56:00.362407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.809 [2024-11-17 18:56:00.375028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:13.809 [2024-11-17 18:56:00.375399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:13.809 [2024-11-17 18:56:00.375437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:13.809 [2024-11-17 18:56:00.375456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:13.809 [2024-11-17 18:56:00.375682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:13.809 [2024-11-17 18:56:00.375901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:13.809 [2024-11-17 18:56:00.375923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:13.809 [2024-11-17 18:56:00.375938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:13.809 [2024-11-17 18:56:00.375953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:13.809 Malloc0
00:35:14.067 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:14.067 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:14.067 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.067 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:14.067 [2024-11-17 18:56:00.388584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.067 [2024-11-17 18:56:00.388934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:14.067 [2024-11-17 18:56:00.388964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2170970 with addr=10.0.0.2, port=4420
00:35:14.067 [2024-11-17 18:56:00.388981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170970 is same with the state(6) to be set
00:35:14.067 [2024-11-17 18:56:00.389197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2170970 (9): Bad file descriptor
00:35:14.067 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:14.067 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 [2024-11-17 18:56:00.389415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:14.067 [2024-11-17 18:56:00.389436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:14.067 [2024-11-17 18:56:00.389450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:14.067 [2024-11-17 18:56:00.389464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:14.067 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.067 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:14.067 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:14.067 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:14.067 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:14.067 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:14.067 [2024-11-17 18:56:00.401235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:14.067 [2024-11-17 18:56:00.402353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:14.067 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:14.067 18:56:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 889650 [2024-11-17 18:56:00.512734] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:35:15.932 4332.00 IOPS, 16.92 MiB/s [2024-11-17T17:56:03.442Z] 4901.12 IOPS, 19.15 MiB/s [2024-11-17T17:56:04.375Z] 5339.11 IOPS, 20.86 MiB/s [2024-11-17T17:56:05.749Z] 5699.00 IOPS, 22.26 MiB/s [2024-11-17T17:56:06.683Z] 5983.64 IOPS, 23.37 MiB/s [2024-11-17T17:56:07.616Z] 6226.25 IOPS, 24.32 MiB/s [2024-11-17T17:56:08.548Z] 6428.54 IOPS, 25.11 MiB/s [2024-11-17T17:56:09.480Z] 6601.57 IOPS, 25.79 MiB/s [2024-11-17T17:56:09.480Z] 6755.40 IOPS, 26.39 MiB/s 00:35:22.904 Latency(us) 00:35:22.904 [2024-11-17T17:56:09.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:22.904 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:22.904 Verification LBA range: start 0x0 length 0x4000 00:35:22.904 Nvme1n1 : 15.01 6758.82 26.40 10387.12 0.00 7443.11 782.79 21359.88 00:35:22.904 [2024-11-17T17:56:09.480Z] =================================================================================================================== 00:35:22.904 [2024-11-17T17:56:09.480Z] Total : 6758.82 26.40 10387.12 0.00 7443.11 782.79 21359.88 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:23.209 rmmod nvme_tcp 00:35:23.209 rmmod nvme_fabrics 00:35:23.209 rmmod nvme_keyring 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 890323 ']' 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 890323 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 890323 ']' 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 890323 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 890323 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 890323' 00:35:23.209 killing process with pid 890323 00:35:23.209 18:56:09 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 890323 00:35:23.209 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 890323 00:35:23.493 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:23.493 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:23.493 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:23.493 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:23.493 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:35:23.493 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:23.493 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:35:23.494 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:23.494 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:23.494 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.494 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:23.494 18:56:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.402 18:56:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:25.402 00:35:25.402 real 0m22.104s 00:35:25.402 user 0m59.234s 00:35:25.402 sys 0m4.082s 00:35:25.402 18:56:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:25.402 18:56:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.402 ************************************ 00:35:25.402 END TEST nvmf_bdevperf 00:35:25.402 
************************************ 00:35:25.402 18:56:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:25.402 18:56:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:25.402 18:56:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:25.402 18:56:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.402 ************************************ 00:35:25.402 START TEST nvmf_target_disconnect 00:35:25.402 ************************************ 00:35:25.402 18:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:25.661 * Looking for test storage... 00:35:25.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:25.661 18:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:25.661 18:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:35:25.661 18:56:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:25.661 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:25.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.661 --rc genhtml_branch_coverage=1 00:35:25.661 --rc genhtml_function_coverage=1 00:35:25.661 --rc genhtml_legend=1 00:35:25.661 --rc geninfo_all_blocks=1 00:35:25.661 --rc geninfo_unexecuted_blocks=1 
00:35:25.661 00:35:25.661 ' 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:25.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.662 --rc genhtml_branch_coverage=1 00:35:25.662 --rc genhtml_function_coverage=1 00:35:25.662 --rc genhtml_legend=1 00:35:25.662 --rc geninfo_all_blocks=1 00:35:25.662 --rc geninfo_unexecuted_blocks=1 00:35:25.662 00:35:25.662 ' 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:25.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.662 --rc genhtml_branch_coverage=1 00:35:25.662 --rc genhtml_function_coverage=1 00:35:25.662 --rc genhtml_legend=1 00:35:25.662 --rc geninfo_all_blocks=1 00:35:25.662 --rc geninfo_unexecuted_blocks=1 00:35:25.662 00:35:25.662 ' 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:25.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.662 --rc genhtml_branch_coverage=1 00:35:25.662 --rc genhtml_function_coverage=1 00:35:25.662 --rc genhtml_legend=1 00:35:25.662 --rc geninfo_all_blocks=1 00:35:25.662 --rc geninfo_unexecuted_blocks=1 00:35:25.662 00:35:25.662 ' 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:25.662 18:56:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:25.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:25.662 18:56:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:27.567 
18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:27.567 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:27.568 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:27.568 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:27.568 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:27.568 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:27.568 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:27.827 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:27.827 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:27.827 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:27.827 18:56:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:27.827 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:27.827 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:27.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:27.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:35:27.827 00:35:27.827 --- 10.0.0.2 ping statistics --- 00:35:27.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.827 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:35:27.827 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:27.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:27.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:35:27.827 00:35:27.827 --- 10.0.0.1 ping statistics --- 00:35:27.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.827 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:35:27.827 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:27.828 18:56:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:27.828 ************************************ 00:35:27.828 START TEST nvmf_target_disconnect_tc1 00:35:27.828 ************************************ 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:27.828 [2024-11-17 18:56:14.337212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:27.828 [2024-11-17 18:56:14.337286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2232610 with 
addr=10.0.0.2, port=4420 00:35:27.828 [2024-11-17 18:56:14.337315] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:27.828 [2024-11-17 18:56:14.337338] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:27.828 [2024-11-17 18:56:14.337351] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:27.828 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:27.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:27.828 Initializing NVMe Controllers 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:27.828 00:35:27.828 real 0m0.097s 00:35:27.828 user 0m0.042s 00:35:27.828 sys 0m0.055s 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:27.828 ************************************ 00:35:27.828 END TEST nvmf_target_disconnect_tc1 00:35:27.828 ************************************ 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:27.828 18:56:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:27.828 ************************************ 00:35:27.828 START TEST nvmf_target_disconnect_tc2 00:35:27.828 ************************************ 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=893359 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 893359 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 893359 ']' 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.828 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:28.087 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:28.087 [2024-11-17 18:56:14.453593] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:28.087 [2024-11-17 18:56:14.453692] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:28.087 [2024-11-17 18:56:14.531057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:28.087 [2024-11-17 18:56:14.577474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:28.087 [2024-11-17 18:56:14.577542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:28.087 [2024-11-17 18:56:14.577569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:28.087 [2024-11-17 18:56:14.577581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:28.087 [2024-11-17 18:56:14.577592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:28.087 [2024-11-17 18:56:14.579103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:28.087 [2024-11-17 18:56:14.579165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:28.087 [2024-11-17 18:56:14.579229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:28.087 [2024-11-17 18:56:14.579232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:28.345 Malloc0 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.345 18:56:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:28.345 [2024-11-17 18:56:14.786913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.345 18:56:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:28.345 [2024-11-17 18:56:14.815194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:28.345 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.346 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=893502 00:35:28.346 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:28.346 18:56:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:30.907 18:56:16 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 893359 00:35:30.907 18:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 
Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Write completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 [2024-11-17 18:56:16.840544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.907 starting I/O failed 00:35:30.907 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O 
failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O 
failed 00:35:30.908 [2024-11-17 18:56:16.840922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 
00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Write completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 Read completed with error (sct=0, sc=8) 00:35:30.908 starting I/O failed 00:35:30.908 [2024-11-17 18:56:16.841252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:30.908 [2024-11-17 18:56:16.841423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.841463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 00:35:30.908 [2024-11-17 18:56:16.841564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.841590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 
00:35:30.908 [2024-11-17 18:56:16.841723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.841757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 00:35:30.908 [2024-11-17 18:56:16.841866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.841902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 00:35:30.908 [2024-11-17 18:56:16.842018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.842071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 00:35:30.908 [2024-11-17 18:56:16.842198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.842226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 00:35:30.908 [2024-11-17 18:56:16.842356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.842382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 
00:35:30.908 [2024-11-17 18:56:16.842539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.842566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 00:35:30.908 [2024-11-17 18:56:16.842691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.842719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 00:35:30.908 [2024-11-17 18:56:16.842805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.842832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 00:35:30.908 [2024-11-17 18:56:16.842924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.842950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 00:35:30.908 [2024-11-17 18:56:16.843026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.843053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 
00:35:30.908 [2024-11-17 18:56:16.843140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.843167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 00:35:30.908 [2024-11-17 18:56:16.843288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.843315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 00:35:30.908 [2024-11-17 18:56:16.843408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.843435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 00:35:30.908 [2024-11-17 18:56:16.843551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.843578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 00:35:30.908 [2024-11-17 18:56:16.843664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.908 [2024-11-17 18:56:16.843700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.908 qpair failed and we were unable to recover it. 
00:35:30.909 [2024-11-17 18:56:16.843791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.843827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.843919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.843945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.844057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.844084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.844160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.844187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.844270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.844296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 
00:35:30.909 [2024-11-17 18:56:16.844378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.844404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.844515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.844554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read 
completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Write completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Write completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Write completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Write completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Write completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Write completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Write completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Write completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 Read completed with error (sct=0, sc=8) 00:35:30.909 starting I/O failed 00:35:30.909 [2024-11-17 18:56:16.844877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:35:30.909 [2024-11-17 18:56:16.844962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.845000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 
00:35:30.909 [2024-11-17 18:56:16.845127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.845156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.845311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.845337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.845430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.845457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.845609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.845637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.845748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.845778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 
00:35:30.909 [2024-11-17 18:56:16.845887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.845915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.846061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.846088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.846201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.846227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.846321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.846347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.846496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.846543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 
00:35:30.909 [2024-11-17 18:56:16.846655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.846690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.846780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.846806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.846892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.846924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.847008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.847035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.847176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.847202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 
00:35:30.909 [2024-11-17 18:56:16.847287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.847314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.847429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.847455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.847550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.909 [2024-11-17 18:56:16.847596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.909 qpair failed and we were unable to recover it. 00:35:30.909 [2024-11-17 18:56:16.847700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.847728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.847832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.847859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 
00:35:30.910 [2024-11-17 18:56:16.847955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.847980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.848091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.848116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.848206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.848232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.848328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.848355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.848491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.848530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 
00:35:30.910 [2024-11-17 18:56:16.848635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.848682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.848834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.848862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.848963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.848990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.849100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.849125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.849214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.849240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 
00:35:30.910 [2024-11-17 18:56:16.849355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.849382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.849503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.849529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.849624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.849652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.849754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.849783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.849872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.849898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 
00:35:30.910 [2024-11-17 18:56:16.850011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.850038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.850183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.850210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.850351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.850378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.850469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.850496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.850592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.850625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 
00:35:30.910 [2024-11-17 18:56:16.850742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.850781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.850879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.850907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.851047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.851073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.851163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.851189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 00:35:30.910 [2024-11-17 18:56:16.851305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.910 [2024-11-17 18:56:16.851331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.910 qpair failed and we were unable to recover it. 
00:35:30.910 [2024-11-17 18:56:16.851445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.910 [2024-11-17 18:56:16.851471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.910 qpair failed and we were unable to recover it.
00:35:30.910 [2024-11-17 18:56:16.851585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.910 [2024-11-17 18:56:16.851612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.910 qpair failed and we were unable to recover it.
00:35:30.910 [2024-11-17 18:56:16.851710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.910 [2024-11-17 18:56:16.851739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.910 qpair failed and we were unable to recover it.
00:35:30.910 [2024-11-17 18:56:16.851842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.910 [2024-11-17 18:56:16.851870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.910 qpair failed and we were unable to recover it.
00:35:30.910 [2024-11-17 18:56:16.851962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.910 [2024-11-17 18:56:16.851989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.910 qpair failed and we were unable to recover it.
00:35:30.910 [2024-11-17 18:56:16.852114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.910 [2024-11-17 18:56:16.852139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.910 qpair failed and we were unable to recover it.
00:35:30.910 [2024-11-17 18:56:16.852234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.910 [2024-11-17 18:56:16.852260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.910 qpair failed and we were unable to recover it.
00:35:30.910 [2024-11-17 18:56:16.852347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.910 [2024-11-17 18:56:16.852373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.910 qpair failed and we were unable to recover it.
00:35:30.910 [2024-11-17 18:56:16.852487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.910 [2024-11-17 18:56:16.852513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.910 qpair failed and we were unable to recover it.
00:35:30.910 [2024-11-17 18:56:16.852626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.910 [2024-11-17 18:56:16.852653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.910 qpair failed and we were unable to recover it.
00:35:30.910 [2024-11-17 18:56:16.852755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.910 [2024-11-17 18:56:16.852782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.910 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.852875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.852901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.852978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.853004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.853120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.853146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.853252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.853278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.853366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.853392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.853498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.853537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.853658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.853699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.853799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.853826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.853907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.853934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.854048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.854074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.854193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.854220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.854311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.854337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.854452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.854477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.854574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.854614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.854701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.854733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.854819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.854846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.854934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.854961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.855046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.855072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.855165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.855192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.855275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.855301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.855884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.855911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.856026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.856052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.856189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.856214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.856301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.856334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.856420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.856445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.856532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.856557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.856680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.856706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.856805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.856830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.856921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.856946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.857058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.857083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.857197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.857222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.857368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.857394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.857505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.857531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.857645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.857670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.857794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.857820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.857907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.857932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.858048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.858073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.911 [2024-11-17 18:56:16.858182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.911 [2024-11-17 18:56:16.858221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.911 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.858345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.858373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.858463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.858490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.858634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.858662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.858763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.858791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.858931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.858956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.859071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.859097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.859263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.859289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.859402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.859427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.859574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.859600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.859702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.859740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.859843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.859870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.859982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.860008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.860181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.860229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.860449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.860503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.860620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.860647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.860781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.860806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.860919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.860944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.861023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.861051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.861209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.861261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.861435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.861505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.861621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.861647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.861795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.861834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.861932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.861960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.862043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.862069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.862152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.862177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.862275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.862302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.862421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.862448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.862562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.862588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.862699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.862726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.862811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.862836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.912 qpair failed and we were unable to recover it.
00:35:30.912 [2024-11-17 18:56:16.862916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.912 [2024-11-17 18:56:16.862942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.863027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.863054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.863169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.863195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.863337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.863362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.863476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.863502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.863621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.863649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.863788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.863827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.863965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.863992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.864135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.864163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.864310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.864337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.864423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.864448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.864562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.864589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.864706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.864736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.864855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.864881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.864962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.864987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.865098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.913 [2024-11-17 18:56:16.865124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.913 qpair failed and we were unable to recover it.
00:35:30.913 [2024-11-17 18:56:16.865242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.865268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.865355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.865382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.865502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.865530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.865622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.865650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.865758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.865785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 
00:35:30.913 [2024-11-17 18:56:16.865871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.865898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.866027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.866057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.866141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.866168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.866256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.866282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.866422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.866448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 
00:35:30.913 [2024-11-17 18:56:16.866587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.866626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.866777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.866806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.866920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.866946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.867057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.867084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.867202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.867228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 
00:35:30.913 [2024-11-17 18:56:16.867337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.867362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.867499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.867524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.867639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.867664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.867766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.867792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.867907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.867932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 
00:35:30.913 [2024-11-17 18:56:16.868042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.868067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.868159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.913 [2024-11-17 18:56:16.868185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.913 qpair failed and we were unable to recover it. 00:35:30.913 [2024-11-17 18:56:16.868294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.868320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.868407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.868433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.868516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.868542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 
00:35:30.914 [2024-11-17 18:56:16.868649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.868698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.868817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.868844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.868946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.868985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.869077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.869105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.869225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.869252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 
00:35:30.914 [2024-11-17 18:56:16.869343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.869369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.869484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.869512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.869595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.869622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.869747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.869782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.869881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.869907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 
00:35:30.914 [2024-11-17 18:56:16.869991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.870017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.870098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.870124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.870214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.870240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.870350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.870375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.870463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.870490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 
00:35:30.914 [2024-11-17 18:56:16.870606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.870632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.870758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.870797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.870890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.870918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.871008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.871036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.871176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.871203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 
00:35:30.914 [2024-11-17 18:56:16.871323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.871351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.871470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.871499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.871626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.871654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.871749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.871775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.871860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.871886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 
00:35:30.914 [2024-11-17 18:56:16.871974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.871999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.872112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.872138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.872232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.872259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.872348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.872376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.872459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.872488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 
00:35:30.914 [2024-11-17 18:56:16.872602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.872628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.872748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.872774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.872890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.872916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.872998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.873023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 00:35:30.914 [2024-11-17 18:56:16.873161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.914 [2024-11-17 18:56:16.873187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.914 qpair failed and we were unable to recover it. 
00:35:30.914 [2024-11-17 18:56:16.873303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.873330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.873451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.873476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.873596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.873623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.873716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.873743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.873832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.873860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 
00:35:30.915 [2024-11-17 18:56:16.873950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.873975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.874197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.874258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.874485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.874536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.874650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.874682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.874767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.874793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 
00:35:30.915 [2024-11-17 18:56:16.874908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.874933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.875072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.875098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.875226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.875266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.875390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.875423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.875545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.875571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 
00:35:30.915 [2024-11-17 18:56:16.875690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.875717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.875837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.875864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.875981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.876009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.876127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.876154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.876248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.876275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 
00:35:30.915 [2024-11-17 18:56:16.876357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.876383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.876467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.876494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.876587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.876625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.876761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.876801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 00:35:30.915 [2024-11-17 18:56:16.876903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.915 [2024-11-17 18:56:16.876930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.915 qpair failed and we were unable to recover it. 
00:35:30.915 [2024-11-17 18:56:16.877013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.915 [2024-11-17 18:56:16.877039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.915 qpair failed and we were unable to recover it.
00:35:30.915 [2024-11-17 18:56:16.877187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.915 [2024-11-17 18:56:16.877212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.915 qpair failed and we were unable to recover it.
00:35:30.915 [2024-11-17 18:56:16.877307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.915 [2024-11-17 18:56:16.877333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.915 qpair failed and we were unable to recover it.
00:35:30.915 [2024-11-17 18:56:16.877456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.915 [2024-11-17 18:56:16.877483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.915 qpair failed and we were unable to recover it.
00:35:30.915 [2024-11-17 18:56:16.877594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.915 [2024-11-17 18:56:16.877621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.915 qpair failed and we were unable to recover it.
00:35:30.915 [2024-11-17 18:56:16.877712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.915 [2024-11-17 18:56:16.877740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.915 qpair failed and we were unable to recover it.
00:35:30.915 [2024-11-17 18:56:16.877835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.915 [2024-11-17 18:56:16.877862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.915 qpair failed and we were unable to recover it.
00:35:30.915 [2024-11-17 18:56:16.878003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.915 [2024-11-17 18:56:16.878029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.915 qpair failed and we were unable to recover it.
00:35:30.915 [2024-11-17 18:56:16.878146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.915 [2024-11-17 18:56:16.878174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.915 qpair failed and we were unable to recover it.
00:35:30.915 [2024-11-17 18:56:16.878299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.915 [2024-11-17 18:56:16.878326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.915 qpair failed and we were unable to recover it.
00:35:30.915 [2024-11-17 18:56:16.878410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.915 [2024-11-17 18:56:16.878436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.915 qpair failed and we were unable to recover it.
00:35:30.915 [2024-11-17 18:56:16.878526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.915 [2024-11-17 18:56:16.878552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.915 qpair failed and we were unable to recover it.
00:35:30.915 [2024-11-17 18:56:16.878663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.915 [2024-11-17 18:56:16.878706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.915 qpair failed and we were unable to recover it.
00:35:30.915 [2024-11-17 18:56:16.878791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.878817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.878935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.878961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.879051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.879080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.879164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.879191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.879329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.879355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.879442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.879468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.879626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.879665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.879808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.879838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.879955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.879982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.880190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.880216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.880307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.880333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.880416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.880442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.880551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.880577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.880664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.880697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.880784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.880810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.880919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.880945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.881057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.881083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.881191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.881217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.881337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.881362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.881497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.881536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.881680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.881709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.881798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.881824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.881940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.881966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.882055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.882082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.882195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.882221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.882376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.882434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.882523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.882549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.882682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.882721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.882815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.882843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.882948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.882986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.883074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.883102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.883191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.883217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.883324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.883349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.883446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.883473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.916 [2024-11-17 18:56:16.883607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.916 [2024-11-17 18:56:16.883635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.916 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.883734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.883764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.883888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.883915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.883998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.884025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.884119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.884147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.884268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.884294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.884421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.884460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.884549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.884576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.884694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.884722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.884815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.884841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.884950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.884977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.885058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.885084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.885223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.885248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.885361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.885387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.885481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.885510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.885650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.885682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.885798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.885824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.885905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.885930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.886123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.886148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.886290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.886316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.886398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.886425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.886540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.886566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.886696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.886735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.886861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.886890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.886972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.886998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.887090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.887118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.887232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.887258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.887397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.887423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.887504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.887531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.887686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.887714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.887826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.887855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.887973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.888001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.888111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.888137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.888295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.888351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.888461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.888487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.888579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.888612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.888758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.888785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.917 qpair failed and we were unable to recover it.
00:35:30.917 [2024-11-17 18:56:16.888927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.917 [2024-11-17 18:56:16.888953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.889042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.889070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.889277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.889328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.889522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.889548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.889667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.889702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.889817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.889844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.889931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.889957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.890075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.890101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.890268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.890336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.890432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.890458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.890544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.890572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.890688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.890715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.890862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.890888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.891002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.891029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.891138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.891165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.891310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.891336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.891444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.891470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.891553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.891579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.891692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.891718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.891804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.891831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.891942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.891968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.892048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.892074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.892188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.892216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.892317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.892356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.892490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.892530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.892652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.892686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.892805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.892832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.892914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.892941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.893060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.893086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.893172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.893200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.893319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.918 [2024-11-17 18:56:16.893345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.918 qpair failed and we were unable to recover it.
00:35:30.918 [2024-11-17 18:56:16.893432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.918 [2024-11-17 18:56:16.893457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.918 qpair failed and we were unable to recover it. 00:35:30.918 [2024-11-17 18:56:16.893544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.918 [2024-11-17 18:56:16.893570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.918 qpair failed and we were unable to recover it. 00:35:30.918 [2024-11-17 18:56:16.893660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.918 [2024-11-17 18:56:16.893694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.918 qpair failed and we were unable to recover it. 00:35:30.918 [2024-11-17 18:56:16.893812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.918 [2024-11-17 18:56:16.893838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.918 qpair failed and we were unable to recover it. 00:35:30.918 [2024-11-17 18:56:16.893953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.918 [2024-11-17 18:56:16.893979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.918 qpair failed and we were unable to recover it. 
00:35:30.918 [2024-11-17 18:56:16.894088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.918 [2024-11-17 18:56:16.894113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.918 qpair failed and we were unable to recover it. 00:35:30.918 [2024-11-17 18:56:16.894257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.918 [2024-11-17 18:56:16.894283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.918 qpair failed and we were unable to recover it. 00:35:30.918 [2024-11-17 18:56:16.894394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.918 [2024-11-17 18:56:16.894420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.894548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.894587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.894721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.894748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 
00:35:30.919 [2024-11-17 18:56:16.894867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.894894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.895007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.895033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.895145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.895172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.895312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.895338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.895424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.895451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 
00:35:30.919 [2024-11-17 18:56:16.895595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.895621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.895696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.895722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.895830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.895856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.895968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.895994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.896131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.896157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 
00:35:30.919 [2024-11-17 18:56:16.896246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.896273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.896370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.896397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.896515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.896541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.896660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.896691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.896781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.896807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 
00:35:30.919 [2024-11-17 18:56:16.896951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.896977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.897115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.897141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.897258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.897284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.897396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.897422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.897519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.897547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 
00:35:30.919 [2024-11-17 18:56:16.897657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.897687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.897802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.897828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.897909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.897935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.898135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.898190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.898315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.898357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 
00:35:30.919 [2024-11-17 18:56:16.898479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.898506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.898634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.898680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.898781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.898810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.898894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.898920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.899005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.899031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 
00:35:30.919 [2024-11-17 18:56:16.899196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.899222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.899334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.899361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.899564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.899592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.899704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.899731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 00:35:30.919 [2024-11-17 18:56:16.899842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.919 [2024-11-17 18:56:16.899868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.919 qpair failed and we were unable to recover it. 
00:35:30.919 [2024-11-17 18:56:16.899983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.900009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.900102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.900129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.900251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.900277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.900406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.900433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.900539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.900565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 
00:35:30.920 [2024-11-17 18:56:16.900642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.900668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.900762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.900788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.900903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.900929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.901019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.901045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.901159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.901187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 
00:35:30.920 [2024-11-17 18:56:16.901299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.901326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.901521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.901547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.901688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.901715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.901854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.901893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.901997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.902036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 
00:35:30.920 [2024-11-17 18:56:16.902196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.902249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.902398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.902469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.902558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.902585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.902703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.902731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.902828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.902854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 
00:35:30.920 [2024-11-17 18:56:16.902972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.902998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.903109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.903135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.903256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.903309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.903424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.903450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.903558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.903584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 
00:35:30.920 [2024-11-17 18:56:16.903725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.903752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.903846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.903871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.904064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.904090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.904282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.904309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.904451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.904477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 
00:35:30.920 [2024-11-17 18:56:16.904563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.904588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.904713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.904740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.904855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.904882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.904997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.905024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.905109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.905135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 
00:35:30.920 [2024-11-17 18:56:16.905275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.905302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.905414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.905439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.905554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.905582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.920 qpair failed and we were unable to recover it. 00:35:30.920 [2024-11-17 18:56:16.905713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.920 [2024-11-17 18:56:16.905752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.905872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.905900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 
00:35:30.921 [2024-11-17 18:56:16.906035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.906060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.906211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.906260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.906347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.906372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.906512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.906544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.906689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.906728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 
00:35:30.921 [2024-11-17 18:56:16.906847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.906875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.907018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.907046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.907139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.907165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.907245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.907272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.907385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.907412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 
00:35:30.921 [2024-11-17 18:56:16.907524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.907551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.907707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.907746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.907869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.907896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.907995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.908021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.908130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.908157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 
00:35:30.921 [2024-11-17 18:56:16.908237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.908263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.908377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.908403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.908492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.908519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.908604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.908630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.908769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.908808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 
00:35:30.921 [2024-11-17 18:56:16.908930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.908957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.909066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.909092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.909183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.909208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.909327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.909381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.909470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.909496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 
00:35:30.921 [2024-11-17 18:56:16.909579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.909606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.909721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.909747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.909830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.909856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.909946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.909971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.910058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.910084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 
00:35:30.921 [2024-11-17 18:56:16.910208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.910247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.910366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.910394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.921 qpair failed and we were unable to recover it. 00:35:30.921 [2024-11-17 18:56:16.910509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.921 [2024-11-17 18:56:16.910535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.910624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.910652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.910735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.910763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 
00:35:30.922 [2024-11-17 18:56:16.910856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.910882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.910991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.911018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.911171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.911224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.911391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.911419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.911536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.911563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 
00:35:30.922 [2024-11-17 18:56:16.911683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.911710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.911794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.911821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.911905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.911930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.912026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.912051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.912142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.912169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 
00:35:30.922 [2024-11-17 18:56:16.912284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.912312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.912395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.912421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.912527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.912552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.912666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.912704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.912899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.912925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 
00:35:30.922 [2024-11-17 18:56:16.913065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.913091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.913285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.913311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.913452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.913478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.913600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.913628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.913724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.913753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 
00:35:30.922 [2024-11-17 18:56:16.913888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.913914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.914045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.914101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.914260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.914311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.914394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.914420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.914565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.914591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 
00:35:30.922 [2024-11-17 18:56:16.914705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.914732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.914817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.914842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.914930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.914958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.915093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.915119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.915208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.915234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 
00:35:30.922 [2024-11-17 18:56:16.915384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.915434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.915533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.915572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.915689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.915728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.915858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.915886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 00:35:30.922 [2024-11-17 18:56:16.916008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.916035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.922 qpair failed and we were unable to recover it. 
00:35:30.922 [2024-11-17 18:56:16.916181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.922 [2024-11-17 18:56:16.916214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 00:35:30.923 [2024-11-17 18:56:16.916359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.923 [2024-11-17 18:56:16.916415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 00:35:30.923 [2024-11-17 18:56:16.916530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.923 [2024-11-17 18:56:16.916558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 00:35:30.923 [2024-11-17 18:56:16.916685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.923 [2024-11-17 18:56:16.916712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 00:35:30.923 [2024-11-17 18:56:16.916906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.923 [2024-11-17 18:56:16.916933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 
00:35:30.923 [2024-11-17 18:56:16.917127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.923 [2024-11-17 18:56:16.917179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 00:35:30.923 [2024-11-17 18:56:16.917372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.923 [2024-11-17 18:56:16.917398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 00:35:30.923 [2024-11-17 18:56:16.917514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.923 [2024-11-17 18:56:16.917540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 00:35:30.923 [2024-11-17 18:56:16.917655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.923 [2024-11-17 18:56:16.917689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 00:35:30.923 [2024-11-17 18:56:16.917776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.923 [2024-11-17 18:56:16.917802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 
00:35:30.923 [2024-11-17 18:56:16.917916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.923 [2024-11-17 18:56:16.917942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 00:35:30.923 [2024-11-17 18:56:16.918048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.923 [2024-11-17 18:56:16.918074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 00:35:30.923 [2024-11-17 18:56:16.918186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.923 [2024-11-17 18:56:16.918212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 00:35:30.923 [2024-11-17 18:56:16.918339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.923 [2024-11-17 18:56:16.918390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 00:35:30.923 [2024-11-17 18:56:16.918535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.923 [2024-11-17 18:56:16.918574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.923 qpair failed and we were unable to recover it. 
00:35:30.923 [2024-11-17 18:56:16.918708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.918747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.918849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.918888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.919036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.919065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.919182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.919209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.919345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.919371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.919458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.919485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.919617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.919656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.919771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.919801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.919912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.919939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.920135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.920188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.920361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.920412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.920495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.920523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.920634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.920665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.920782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.920808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.920894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.920920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.921034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.921060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.921170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.921196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.921311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.921337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.921450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.921475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.921584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.921610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.921700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.921727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.921837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.921863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.922001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.922026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.923 qpair failed and we were unable to recover it.
00:35:30.923 [2024-11-17 18:56:16.922117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.923 [2024-11-17 18:56:16.922143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.922260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.922285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.922413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.922452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.922554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.922582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.922734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.922762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.922881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.922907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.923022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.923048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.923157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.923183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.923335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.923361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.923442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.923469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.923602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.923641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.923744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.923772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.923865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.923891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.924000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.924026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.924186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.924212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.924358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.924410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.924498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.924524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.924649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.924681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.924794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.924821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.924938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.924967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.925089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.925119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.925243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.925269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.925353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.925379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.925463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.925489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.925571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.925597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.925711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.925738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.925816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.925842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.925955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.925981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.926097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.926122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.926206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.926238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.926351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.926377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.926488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.926516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.926600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.926629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.926724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.926751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.926869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.926895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.927029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.927055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.927170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.927198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.927290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.927316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.924 qpair failed and we were unable to recover it.
00:35:30.924 [2024-11-17 18:56:16.927451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.924 [2024-11-17 18:56:16.927479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.927564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.927589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.927693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.927720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.927804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.927830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.927923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.927949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.928067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.928094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.928241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.928266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.928353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.928378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.928466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.928492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.928605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.928631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.928723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.928749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.928864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.928891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.929037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.929063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.929148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.929174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.929303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.929332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.929475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.929504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.929608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.929648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.929799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.929826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.929918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.929951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.930065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.930091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.930173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.930199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.930346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.930398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.930486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.930512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.930647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.930693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.930793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.930822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.930977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.931016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.931163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.931190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.931310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.931364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.931491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.931517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.931604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.931629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.931757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.931784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.931897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.931923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.932014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.932040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.932132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.932158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.932271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.932297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.932408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.932434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.932517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.932543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.932644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.932693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.925 [2024-11-17 18:56:16.932845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.925 [2024-11-17 18:56:16.932873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.925 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.932991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.933018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.933095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.933121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.933204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.933231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.933327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.933354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.933471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.933497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.933605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.933643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.933775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.933808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.933952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.933977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.934061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.934088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.934238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.934288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.934475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.934525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.934613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.934638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.934728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.934754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.934869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.934895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.935008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.926 [2024-11-17 18:56:16.935033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.926 qpair failed and we were unable to recover it.
00:35:30.926 [2024-11-17 18:56:16.935142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.935168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 00:35:30.926 [2024-11-17 18:56:16.935276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.935301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 00:35:30.926 [2024-11-17 18:56:16.935384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.935410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 00:35:30.926 [2024-11-17 18:56:16.935511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.935550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 00:35:30.926 [2024-11-17 18:56:16.935646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.935681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 
00:35:30.926 [2024-11-17 18:56:16.935778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.935805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 00:35:30.926 [2024-11-17 18:56:16.935921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.935947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 00:35:30.926 [2024-11-17 18:56:16.936029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.936055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 00:35:30.926 [2024-11-17 18:56:16.936193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.936220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 00:35:30.926 [2024-11-17 18:56:16.936333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.936359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 
00:35:30.926 [2024-11-17 18:56:16.936448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.936474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 00:35:30.926 [2024-11-17 18:56:16.936567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.936606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 00:35:30.926 [2024-11-17 18:56:16.936758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.936786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 00:35:30.926 [2024-11-17 18:56:16.936896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.936921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 00:35:30.926 [2024-11-17 18:56:16.936997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.926 [2024-11-17 18:56:16.937023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.926 qpair failed and we were unable to recover it. 
00:35:30.926 [2024-11-17 18:56:16.937138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.937163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.937284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.937334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.937452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.937479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.937622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.937662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.937776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.937806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 
00:35:30.927 [2024-11-17 18:56:16.937922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.937948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.938091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.938117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.938203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.938229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.938368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.938394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.938510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.938536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 
00:35:30.927 [2024-11-17 18:56:16.938624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.938653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.938748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.938775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.938859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.938885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.938979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.939004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.939117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.939142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 
00:35:30.927 [2024-11-17 18:56:16.939228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.939254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.939344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.939378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.939469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.939497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.939641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.939667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.939791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.939817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 
00:35:30.927 [2024-11-17 18:56:16.939902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.939929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.940010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.940036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.940126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.940152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.940239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.940267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.940451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.940501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 
00:35:30.927 [2024-11-17 18:56:16.940584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.940610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.940720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.940747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.940862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.940888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.941004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.941031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.941176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.941201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 
00:35:30.927 [2024-11-17 18:56:16.941291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.941317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.941402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.941430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.941551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.941578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.941694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.941721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.941803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.941829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 
00:35:30.927 [2024-11-17 18:56:16.941967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.941993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.942105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.942132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.942250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.927 [2024-11-17 18:56:16.942276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.927 qpair failed and we were unable to recover it. 00:35:30.927 [2024-11-17 18:56:16.942363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.942389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.942520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.942559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 
00:35:30.928 [2024-11-17 18:56:16.942683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.942711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.942822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.942849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.942973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.942999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.943081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.943111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.943320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.943371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 
00:35:30.928 [2024-11-17 18:56:16.943486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.943512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.943652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.943684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.943768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.943794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.943875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.943901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.944008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.944035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 
00:35:30.928 [2024-11-17 18:56:16.944227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.944253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.944364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.944390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.944492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.944531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.944652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.944688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.944784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.944812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 
00:35:30.928 [2024-11-17 18:56:16.944929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.944955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.945095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.945149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.945307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.945361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.945453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.945479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.945591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.945616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 
00:35:30.928 [2024-11-17 18:56:16.945711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.945738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.945826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.945853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.945992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.946018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.946128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.946153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.946235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.946260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 
00:35:30.928 [2024-11-17 18:56:16.946344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.946369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.946497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.946535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.946661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.946694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.946834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.946861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 00:35:30.928 [2024-11-17 18:56:16.947026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.928 [2024-11-17 18:56:16.947080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.928 qpair failed and we were unable to recover it. 
00:35:30.928 [2024-11-17 18:56:16.947263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.928 [2024-11-17 18:56:16.947318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.928 qpair failed and we were unable to recover it.
00:35:30.928 [2024-11-17 18:56:16.947500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.928 [2024-11-17 18:56:16.947549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.928 qpair failed and we were unable to recover it.
00:35:30.928 [2024-11-17 18:56:16.947640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.928 [2024-11-17 18:56:16.947666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.928 qpair failed and we were unable to recover it.
00:35:30.928 [2024-11-17 18:56:16.947792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.928 [2024-11-17 18:56:16.947819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.928 qpair failed and we were unable to recover it.
00:35:30.928 [2024-11-17 18:56:16.947928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.928 [2024-11-17 18:56:16.947954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.928 qpair failed and we were unable to recover it.
00:35:30.928 [2024-11-17 18:56:16.948094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.928 [2024-11-17 18:56:16.948121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.928 qpair failed and we were unable to recover it.
00:35:30.928 [2024-11-17 18:56:16.948210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.948236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.948384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.948434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.948574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.948601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.948689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.948717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.948830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.948856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.948970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.948997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.949141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.949192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.949285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.949310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.949433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.949459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.949549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.949575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.949688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.949714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.949798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.949824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.949938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.949964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.950075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.950101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.950178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.950205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.950339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.950379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.950483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.950521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.950642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.950670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.950764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.950791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.950929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.950955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.951055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.951099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.951248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.951309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.951457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.951508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.951595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.951621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.951741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.951769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.951891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.951921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.952032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.952080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.952226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.952276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.952363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.952390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.952501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.952526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.952607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.952633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.952781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.952808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.952892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.952918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.953029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.953054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.953140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.953166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.953314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.953339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.953425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.953450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.953529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.929 [2024-11-17 18:56:16.953557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.929 qpair failed and we were unable to recover it.
00:35:30.929 [2024-11-17 18:56:16.953684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.953714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.953831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.953858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.953966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.953992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.954081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.954108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.954224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.954250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.954367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.954393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.954478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.954504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.954600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.954638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.954795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.954824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.954905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.954931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.955051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.955077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.955217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.955243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.955320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.955347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.955425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.955451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.955557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.955583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.955718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.955757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.955907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.955934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.956054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.956080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.956195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.956221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.956336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.956364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.956447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.956473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.956625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.956653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.956755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.956782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.956875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.956906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.956996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.957023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.957221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.957271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.957357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.957384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.957497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.957523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.957615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.957640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.957789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.957828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.957917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.957945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.958062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.958088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.958204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.958230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.930 qpair failed and we were unable to recover it.
00:35:30.930 [2024-11-17 18:56:16.958339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.930 [2024-11-17 18:56:16.958366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.958452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.958479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.958589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.958615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.958754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.958781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.958908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.958947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.959063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.959091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.959176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.959203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.959315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.959341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.959430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.959458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.959549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.959579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.959677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.959705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.959849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.959876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.959965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.959991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.960078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.960104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.960223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.960249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.960392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.960418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.960531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.960558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.960671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.960707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.960861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.960888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.960975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.961001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.961095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.961121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.961269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.961295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.961433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.961459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.961541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.961566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.961658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.961692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.961800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.961826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.961941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.961967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.962058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.962084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.962205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.962231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.962345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.962371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.962471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.962511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.962662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.962700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.962818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.962844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.962962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.962988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.963075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.963101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.963212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.963238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.963344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.963370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.963464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.931 [2024-11-17 18:56:16.963491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.931 qpair failed and we were unable to recover it.
00:35:30.931 [2024-11-17 18:56:16.963605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.963631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.963770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.963798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.963911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.963937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.964022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.964049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.964160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.964186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.964277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.964303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.964387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.964418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.964532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.964558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.964668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.964700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.964808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.964834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.964909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.964935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.965053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.965078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.965218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.965243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.965360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.965389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.965514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.965553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.965711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.965751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.965901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.965929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.966046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.966072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.966172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.966199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.966281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.966307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.966451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.966477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.966593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.966620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.966709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.966737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.966868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.966908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.967036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.967063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.967179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.967206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.967322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.967349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.967491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.967517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.967661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.967697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.967810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.967837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.967952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.967978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.968092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.968118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.968234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.968260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.968354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.968392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.968520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.968546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.968625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.968651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.968773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.968800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.968933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.932 [2024-11-17 18:56:16.968960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.932 qpair failed and we were unable to recover it.
00:35:30.932 [2024-11-17 18:56:16.969069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.969095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.969188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.969214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.969294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.969320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.969429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.969454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.969542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.969569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.969687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.969714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.969829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.969856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.969966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.969992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.970079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.970113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.970254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.970304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.970394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.970422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.970534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.970560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.970644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.970670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.970773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.970800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.970939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.970965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.971050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.971076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.971215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.971241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.971329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.971355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.971488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.971528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.971649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.971688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.971805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.971831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.971947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.971972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.972069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.972094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.972201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.972241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.972373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.972399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.972511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.972536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.972645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.972692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.972825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.972852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.972994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.973020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.973213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.973239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.973377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.973404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.973514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.973540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.973660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.973694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.973775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.973800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.973881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.973907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.973984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.974013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.974207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.933 [2024-11-17 18:56:16.974256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.933 qpair failed and we were unable to recover it.
00:35:30.933 [2024-11-17 18:56:16.974400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.933 [2024-11-17 18:56:16.974453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.933 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.974601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.974629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.974754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.974784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.974905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.974932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.975049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.975075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 
00:35:30.934 [2024-11-17 18:56:16.975226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.975276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.975397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.975424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.975548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.975574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.975715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.975742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.975857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.975884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 
00:35:30.934 [2024-11-17 18:56:16.975962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.975990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.976074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.976100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.976248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.976274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.976390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.976418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.976556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.976596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 
00:35:30.934 [2024-11-17 18:56:16.976732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.976771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.976898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.976926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.977043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.977068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.977207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.977255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.977432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.977486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 
00:35:30.934 [2024-11-17 18:56:16.977568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.977595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.977703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.977742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.977872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.977899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.978041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.978068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.978183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.978209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 
00:35:30.934 [2024-11-17 18:56:16.978381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.978444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.978567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.978595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.978713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.978739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.978825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.978850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.978938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.978963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 
00:35:30.934 [2024-11-17 18:56:16.979114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.979163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.979254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.979279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.979417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.979442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.979524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.979549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.979693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.979719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 
00:35:30.934 [2024-11-17 18:56:16.979803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.979829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.979948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.979976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.980063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.980089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.934 [2024-11-17 18:56:16.980170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.934 [2024-11-17 18:56:16.980204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.934 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.980386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.980438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 
00:35:30.935 [2024-11-17 18:56:16.980554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.980581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.980699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.980726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.980846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.980872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.980984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.981010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.981096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.981121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 
00:35:30.935 [2024-11-17 18:56:16.981236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.981263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.981456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.981483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.981602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.981629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.981759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.981786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.981907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.981933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 
00:35:30.935 [2024-11-17 18:56:16.982072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.982098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.982212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.982237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.982332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.982358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.982438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.982464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.982597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.982635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 
00:35:30.935 [2024-11-17 18:56:16.982743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.982782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.982880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.982907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.983043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.983094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.983278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.983328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.983542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.983599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 
00:35:30.935 [2024-11-17 18:56:16.983720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.983747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.983837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.983865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.983988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.984016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.984128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.984154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.984297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.984323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 
00:35:30.935 [2024-11-17 18:56:16.984468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.984494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.984582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.984608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.984723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.984750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.984840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.984866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.984974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.985000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 
00:35:30.935 [2024-11-17 18:56:16.985114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.985141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.985258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.935 [2024-11-17 18:56:16.985285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.935 qpair failed and we were unable to recover it. 00:35:30.935 [2024-11-17 18:56:16.985405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.985432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.985524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.985550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.985656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.985691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 
00:35:30.936 [2024-11-17 18:56:16.985807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.985833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.985918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.985943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.986028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.986054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.986148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.986179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.986290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.986316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 
00:35:30.936 [2024-11-17 18:56:16.986401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.986427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.986543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.986570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.986652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.986685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.986801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.986827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.986957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.986995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 
00:35:30.936 [2024-11-17 18:56:16.987113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.987140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.987219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.987246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.987334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.987361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.987476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.987502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.987622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.987648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 
00:35:30.936 [2024-11-17 18:56:16.987776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.987803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.987891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.987917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.988017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.988044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.988134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.988160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.988249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.988276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 
00:35:30.936 [2024-11-17 18:56:16.988365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.988392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.988473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.988498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.988585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.988611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.988728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.988755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.988835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.988861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 
00:35:30.936 [2024-11-17 18:56:16.988980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.989005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.989118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.989144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.989232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.989258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.989346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.989371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.989482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.989507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 
00:35:30.936 [2024-11-17 18:56:16.989611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.989636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.989726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.989753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.989861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.989889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.989970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.989996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 00:35:30.936 [2024-11-17 18:56:16.990106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.936 [2024-11-17 18:56:16.990133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.936 qpair failed and we were unable to recover it. 
00:35:30.937 [2024-11-17 18:56:16.990250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.990275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.990392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.990419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.990559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.990585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.990703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.990729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.990841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.990866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 
00:35:30.937 [2024-11-17 18:56:16.990947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.990972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.991058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.991083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.991196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.991222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.991333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.991365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.991512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.991537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 
00:35:30.937 [2024-11-17 18:56:16.991664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.991696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.991780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.991805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.991893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.991919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.992055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.992080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.992188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.992215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 
00:35:30.937 [2024-11-17 18:56:16.992301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.992326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.992436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.992462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.992556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.992582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.992725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.992752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.992838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.992864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 
00:35:30.937 [2024-11-17 18:56:16.992954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.992981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.993058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.993084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.993201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.993229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.993352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.993377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.993518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.993544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 
00:35:30.937 [2024-11-17 18:56:16.993663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.993696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.993785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.993810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.993890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.993915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.994035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.994061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.994140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.994166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 
00:35:30.937 [2024-11-17 18:56:16.994280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.994306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.994419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.994446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.994546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.994585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.994703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.994732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.994853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.994879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 
00:35:30.937 [2024-11-17 18:56:16.995005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.995032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.995119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.995145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.937 qpair failed and we were unable to recover it. 00:35:30.937 [2024-11-17 18:56:16.995257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.937 [2024-11-17 18:56:16.995283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.995396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.995422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.995512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.995538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 
00:35:30.938 [2024-11-17 18:56:16.995623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.995649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.995740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.995767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.995849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.995876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.995965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.995991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.996074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.996102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 
00:35:30.938 [2024-11-17 18:56:16.996235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.996274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.996400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.996428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.996547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.996572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.996663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.996703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.996819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.996846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 
00:35:30.938 [2024-11-17 18:56:16.996933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.996958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.997070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.997096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.997184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.997209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.997321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.997346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.997433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.997459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 
00:35:30.938 [2024-11-17 18:56:16.997574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.997599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.997684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.997710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.997791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.997817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.997907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.997934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.998048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.998074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 
00:35:30.938 [2024-11-17 18:56:16.998186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.998212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.998327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.998354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.998498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.998524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.998639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.998665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.998768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.998794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 
00:35:30.938 [2024-11-17 18:56:16.998877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.998905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.999020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.999045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.999122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.999148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.999235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.999263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.999400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.999426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 
00:35:30.938 [2024-11-17 18:56:16.999535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.999560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.999686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.999712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.999795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.999822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:16.999918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:16.999947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.938 [2024-11-17 18:56:17.000036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:17.000062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 
00:35:30.938 [2024-11-17 18:56:17.000175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.938 [2024-11-17 18:56:17.000202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.938 qpair failed and we were unable to recover it. 00:35:30.939 [2024-11-17 18:56:17.000349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.939 [2024-11-17 18:56:17.000375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.939 qpair failed and we were unable to recover it. 00:35:30.939 [2024-11-17 18:56:17.000460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.939 [2024-11-17 18:56:17.000486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.939 qpair failed and we were unable to recover it. 00:35:30.939 [2024-11-17 18:56:17.000597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.939 [2024-11-17 18:56:17.000624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.939 qpair failed and we were unable to recover it. 00:35:30.939 [2024-11-17 18:56:17.000741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.939 [2024-11-17 18:56:17.000767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.939 qpair failed and we were unable to recover it. 
00:35:30.939 [2024-11-17 18:56:17.000961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.000989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.001181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.001208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.001329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.001356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.001476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.001502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.001593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.001619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.001711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.001740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.001856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.001882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.002020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.002046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.002157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.002183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.002327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.002383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.002469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.002495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.002573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.002599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.002717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.002743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.002825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.002852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.002957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.002983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.003072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.003097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.003214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.003240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.003360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.003388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.003505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.003532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.003672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.003704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.003799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.003825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.003941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.003968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.004065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.004091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.004184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.004211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.004404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.004430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.004569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.004595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.004684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.004711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.004826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.004854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.004999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.005025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.939 [2024-11-17 18:56:17.005164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.939 [2024-11-17 18:56:17.005190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.939 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.005326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.005377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.005518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.005544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.005662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.005696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.005836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.005862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.005974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.006000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.006087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.006113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.006237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.006263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.006379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.006405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.006519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.006547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.006664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.006697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.006817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.006843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.006986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.007014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.007145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.007171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.007279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.007305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.007445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.007471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.007588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.007615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.007701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.007728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.007841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.007867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.007949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.007975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.008068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.008098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.008175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.008201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.008310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.008336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.008419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.008446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.008560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.008587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.008698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.008726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.008834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.008860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.008954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.008981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.009129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.009156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.009271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.009298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.009379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.009405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.009522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.009548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.009671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.009717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.009843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.009871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.009998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.010025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.010124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.010152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.010317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.010368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.010459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.010486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.940 [2024-11-17 18:56:17.010579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.940 [2024-11-17 18:56:17.010605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.940 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.010749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.010776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.010864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.010890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.011004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.011030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.011105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.011131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.011245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.011272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.011414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.011441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.011549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.011575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.011713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.011740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.011831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.011862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.011973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.011999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.012137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.012163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.012305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.012331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.012442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.012468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.012580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.012606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.012744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.012771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.012883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.012908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.013021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.013047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.013137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.013162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.013275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.013301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.013411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.013437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.013529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.013555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.013746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.013774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.013876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.013902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.014018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.014044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.014194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.014219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.014331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.014357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.014452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.941 [2024-11-17 18:56:17.014478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.941 qpair failed and we were unable to recover it.
00:35:30.941 [2024-11-17 18:56:17.014589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.941 [2024-11-17 18:56:17.014614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.941 qpair failed and we were unable to recover it. 00:35:30.941 [2024-11-17 18:56:17.014724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.941 [2024-11-17 18:56:17.014751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.941 qpair failed and we were unable to recover it. 00:35:30.941 [2024-11-17 18:56:17.014871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.941 [2024-11-17 18:56:17.014898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.941 qpair failed and we were unable to recover it. 00:35:30.941 [2024-11-17 18:56:17.015039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.941 [2024-11-17 18:56:17.015065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.941 qpair failed and we were unable to recover it. 00:35:30.941 [2024-11-17 18:56:17.015150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.941 [2024-11-17 18:56:17.015176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.941 qpair failed and we were unable to recover it. 
00:35:30.941 [2024-11-17 18:56:17.015258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.941 [2024-11-17 18:56:17.015285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.941 qpair failed and we were unable to recover it. 00:35:30.941 [2024-11-17 18:56:17.015401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.941 [2024-11-17 18:56:17.015427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.941 qpair failed and we were unable to recover it. 00:35:30.941 [2024-11-17 18:56:17.015511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.941 [2024-11-17 18:56:17.015537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.941 qpair failed and we were unable to recover it. 00:35:30.941 [2024-11-17 18:56:17.015654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.941 [2024-11-17 18:56:17.015686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.941 qpair failed and we were unable to recover it. 00:35:30.941 [2024-11-17 18:56:17.015787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.941 [2024-11-17 18:56:17.015813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.941 qpair failed and we were unable to recover it. 
00:35:30.941 [2024-11-17 18:56:17.015895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.941 [2024-11-17 18:56:17.015922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.941 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.016064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.016092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.016212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.016238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.016350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.016376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.016495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.016521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 
00:35:30.942 [2024-11-17 18:56:17.016631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.016657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.016791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.016818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.016930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.016956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.017066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.017092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.017186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.017212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 
00:35:30.942 [2024-11-17 18:56:17.017303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.017328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.017415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.017442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.017604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.017645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.017779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.017809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.017905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.017932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 
00:35:30.942 [2024-11-17 18:56:17.018046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.018074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.018193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.018219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.018314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.018340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.018456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.018483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.018629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.018654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 
00:35:30.942 [2024-11-17 18:56:17.018803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.018829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.018956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.018983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.019067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.019093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.019178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.019204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.019294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.019321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 
00:35:30.942 [2024-11-17 18:56:17.019437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.019463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.019586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.019612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.019703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.019730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.019879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.019927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.020032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.020081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 
00:35:30.942 [2024-11-17 18:56:17.020170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.020196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.020336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.020363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.020502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.020528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.020707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.020734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.020872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.020898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 
00:35:30.942 [2024-11-17 18:56:17.021029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.021083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.021226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.021253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.021368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.021394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.021505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.942 [2024-11-17 18:56:17.021532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.942 qpair failed and we were unable to recover it. 00:35:30.942 [2024-11-17 18:56:17.021649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.021689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 
00:35:30.943 [2024-11-17 18:56:17.021785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.021811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.021893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.021920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.022064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.022090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.022205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.022231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.022348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.022374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 
00:35:30.943 [2024-11-17 18:56:17.022485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.022512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.022601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.022627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.022724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.022751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.022858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.022884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.022996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.023021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 
00:35:30.943 [2024-11-17 18:56:17.023103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.023128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.023243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.023268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.023384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.023410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.023500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.023527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.023642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.023670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 
00:35:30.943 [2024-11-17 18:56:17.023766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.023791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.023934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.023960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.024070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.024096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.024187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.024213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.024331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.024357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 
00:35:30.943 [2024-11-17 18:56:17.024499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.024524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.024635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.024660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.024753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.024779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.024865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.024890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.024971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.024997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 
00:35:30.943 [2024-11-17 18:56:17.025105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.025131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.025249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.025280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.025396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.025422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.025513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.025538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.025647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.025672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 
00:35:30.943 [2024-11-17 18:56:17.025773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.025798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.025882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.025907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.026044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.026069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.026162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.026187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 00:35:30.943 [2024-11-17 18:56:17.026268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.943 [2024-11-17 18:56:17.026293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.943 qpair failed and we were unable to recover it. 
00:35:30.943 [2024-11-17 18:56:17.026404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.943 [2024-11-17 18:56:17.026430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.943 qpair failed and we were unable to recover it.
00:35:30.943 [2024-11-17 18:56:17.026523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.943 [2024-11-17 18:56:17.026549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.943 qpair failed and we were unable to recover it.
00:35:30.943 [2024-11-17 18:56:17.026662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.026695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.026811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.026838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.026930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.026955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.027040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.027066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.027175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.027201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.027298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.027324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.027413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.027440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.027548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.027588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.027686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.027715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.027836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.027863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.027948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.027974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.028097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.028124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.028266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.028292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.028405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.028431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.028515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.028542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.028657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.028703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.028825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.028854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.028946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.028971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.029104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.029154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.029329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.029386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.029499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.029525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.029633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.029659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.029760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.029786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.029878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.029906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.029988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.030014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.030207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.030234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.030345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.030371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.030493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.030519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.030608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.030634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.030756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.030783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.030881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.030908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.031045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.031072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.031182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.031208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.031400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.944 [2024-11-17 18:56:17.031426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.944 qpair failed and we were unable to recover it.
00:35:30.944 [2024-11-17 18:56:17.031526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.031554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.031668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.031700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.031797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.031823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.031904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.031931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.032013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.032039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.032155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.032181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.032270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.032296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.032435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.032461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.032548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.032574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.032691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.032723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.032847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.032872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.032986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.033011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.033094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.033119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.033222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.033277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.033396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.033423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.033510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.033536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.033620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.033646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.033764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.033790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.033873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.033898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.033978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.034003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.034095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.034121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.034202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.034227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.034344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.034370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.034490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.034515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.034631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.034657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.034746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.034772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.034866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.034892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.034971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.034997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.035134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.035160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.035253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.035282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.035374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.035401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.035490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.035517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.035589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.035616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.035739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.035767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.035848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.035874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.036015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.945 [2024-11-17 18:56:17.036041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.945 qpair failed and we were unable to recover it.
00:35:30.945 [2024-11-17 18:56:17.036155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.036186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.036270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.036297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.036379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.036406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.036543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.036569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.036688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.036715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.036803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.036830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.036922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.036948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.037034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.037062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.037180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.037208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.037402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.037429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.037571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.037597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.037688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.037715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.037828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.037855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.037940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.037966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.038069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.038095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.038185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.038212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.038360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.038386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.038496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.038524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.038619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.038644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.038747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.038774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.038908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.038958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.039135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.039186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.039363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.039416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.039527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.039552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.039634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.039659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.039743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.039770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.039884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.039909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.039997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.040028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.040108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.040133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.040243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.040269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.040385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.040410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.040520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.040546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.040629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.040656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.040776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.040803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.040910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.040936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.041041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.041067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.041150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.041176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.041327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.946 [2024-11-17 18:56:17.041353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.946 qpair failed and we were unable to recover it.
00:35:30.946 [2024-11-17 18:56:17.041492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.947 [2024-11-17 18:56:17.041517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.947 qpair failed and we were unable to recover it.
00:35:30.947 [2024-11-17 18:56:17.041601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.947 [2024-11-17 18:56:17.041626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.947 qpair failed and we were unable to recover it.
00:35:30.947 [2024-11-17 18:56:17.041742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.947 [2024-11-17 18:56:17.041768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.947 qpair failed and we were unable to recover it.
00:35:30.947 [2024-11-17 18:56:17.041851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.947 [2024-11-17 18:56:17.041876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.947 qpair failed and we were unable to recover it.
00:35:30.947 [2024-11-17 18:56:17.041960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.947 [2024-11-17 18:56:17.041985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.947 qpair failed and we were unable to recover it.
00:35:30.947 [2024-11-17 18:56:17.042102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.042128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.042212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.042239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.042361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.042388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.042526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.042552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.042672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.042705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 
00:35:30.947 [2024-11-17 18:56:17.042814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.042839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.042929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.042955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.043031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.043056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.043134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.043160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.043237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.043262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 
00:35:30.947 [2024-11-17 18:56:17.043385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.043411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.043502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.043533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.043622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.043648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.043742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.043768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.043856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.043885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 
00:35:30.947 [2024-11-17 18:56:17.043979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.044005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.044090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.044116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.044229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.044255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.044371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.044398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.044535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.044562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 
00:35:30.947 [2024-11-17 18:56:17.044693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.044721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.044914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.044940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.045082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.045108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.045226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.045276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.045384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.045410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 
00:35:30.947 [2024-11-17 18:56:17.045508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.045534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.045621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.045647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.045771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.045798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.045882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.045908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.046043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.046069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 
00:35:30.947 [2024-11-17 18:56:17.046153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.046181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.046293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.046319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.046409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.947 [2024-11-17 18:56:17.046435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.947 qpair failed and we were unable to recover it. 00:35:30.947 [2024-11-17 18:56:17.046526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.046554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.046676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.046705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 
00:35:30.948 [2024-11-17 18:56:17.046818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.046844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.046990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.047015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.047127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.047152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.047260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.047315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.047452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.047501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 
00:35:30.948 [2024-11-17 18:56:17.047613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.047639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.047756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.047782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.047869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.047897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.048041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.048067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.048147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.048173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 
00:35:30.948 [2024-11-17 18:56:17.048262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.048288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.048482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.048508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.048653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.048684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.048798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.048825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.048919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.048944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 
00:35:30.948 [2024-11-17 18:56:17.049057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.049105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.049224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.049251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.049450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.049477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.049588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.049615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.049721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.049748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 
00:35:30.948 [2024-11-17 18:56:17.049893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.049919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.050114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.050140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.050257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.050283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.050398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.050424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.050541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.050567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 
00:35:30.948 [2024-11-17 18:56:17.050684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.050711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.050800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.050826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.050941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.050967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.051073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.051100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.051242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.051268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 
00:35:30.948 [2024-11-17 18:56:17.051367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.051394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.051511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.051539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.051684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.051711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.051821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.051847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.948 [2024-11-17 18:56:17.051968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.051994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 
00:35:30.948 [2024-11-17 18:56:17.052087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.948 [2024-11-17 18:56:17.052113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.948 qpair failed and we were unable to recover it. 00:35:30.949 [2024-11-17 18:56:17.052224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.949 [2024-11-17 18:56:17.052250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.949 qpair failed and we were unable to recover it. 00:35:30.949 [2024-11-17 18:56:17.052336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.949 [2024-11-17 18:56:17.052363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.949 qpair failed and we were unable to recover it. 00:35:30.949 [2024-11-17 18:56:17.052444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.949 [2024-11-17 18:56:17.052470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.949 qpair failed and we were unable to recover it. 00:35:30.949 [2024-11-17 18:56:17.052549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.949 [2024-11-17 18:56:17.052574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.949 qpair failed and we were unable to recover it. 
00:35:30.949 [2024-11-17 18:56:17.052694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.949 [2024-11-17 18:56:17.052721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.949 qpair failed and we were unable to recover it. 00:35:30.949 [2024-11-17 18:56:17.052803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.949 [2024-11-17 18:56:17.052829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.949 qpair failed and we were unable to recover it. 00:35:30.949 [2024-11-17 18:56:17.052928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.949 [2024-11-17 18:56:17.052954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.949 qpair failed and we were unable to recover it. 00:35:30.949 [2024-11-17 18:56:17.053068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.949 [2024-11-17 18:56:17.053095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.949 qpair failed and we were unable to recover it. 00:35:30.949 [2024-11-17 18:56:17.053213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.949 [2024-11-17 18:56:17.053238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.949 qpair failed and we were unable to recover it. 
00:35:30.949 [2024-11-17 18:56:17.053346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.949 [2024-11-17 18:56:17.053372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.949 qpair failed and we were unable to recover it. 00:35:30.949 [2024-11-17 18:56:17.053484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.949 [2024-11-17 18:56:17.053510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.949 qpair failed and we were unable to recover it. 00:35:30.949 [2024-11-17 18:56:17.053596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.949 [2024-11-17 18:56:17.053622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.949 qpair failed and we were unable to recover it. 00:35:30.949 [2024-11-17 18:56:17.053716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.949 [2024-11-17 18:56:17.053743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.949 qpair failed and we were unable to recover it. 00:35:30.949 [2024-11-17 18:56:17.053883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.949 [2024-11-17 18:56:17.053909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.949 qpair failed and we were unable to recover it. 
00:35:30.949 [2024-11-17 18:56:17.054021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.054046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.054131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.054158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.054271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.054296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.054435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.054460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.054544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.054570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.054658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.054691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.054806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.054832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.054953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4d630 is same with the state(6) to be set
00:35:30.949 [2024-11-17 18:56:17.055173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.055212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.055325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.055352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.055441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.055466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.055559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.055586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.055706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.055734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.055828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.055855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.055935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.055985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.056123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.056178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.056342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.056382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.056532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.056560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.056644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.056670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.056769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.056796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.056912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.056937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.057076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.057102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.949 qpair failed and we were unable to recover it.
00:35:30.949 [2024-11-17 18:56:17.057197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.949 [2024-11-17 18:56:17.057223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.057309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.057338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.057452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.057478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.057567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.057593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.057684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.057711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.057827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.057854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.057946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.057971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.058055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.058081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.058193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.058218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.058323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.058349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.058464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.058517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.058680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.058729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.058933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.058977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.059095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.059122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.059303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.059351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.059470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.059520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.059625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.059689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.059784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.059814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.059925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.059951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.060069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.060095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.060184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.060211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.060324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.060350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.060477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.060516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.060635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.060663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.060806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.060832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.060976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.061002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.061125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.061151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.061234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.061260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.061349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.061376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.061483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.061508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.061647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.061680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.061823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.061848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.061939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.061964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.062043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.062068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.062157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.062183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.062296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.062322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.062461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.062487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.062572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.062597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.062698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.062729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.062845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.062877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.063002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.063029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.950 [2024-11-17 18:56:17.063109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.950 [2024-11-17 18:56:17.063135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.950 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.063245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.063270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.063352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.063377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.063475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.063503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.063592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.063617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.063737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.063764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.063840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.063865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.063977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.064004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.064082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.064108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.064226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.064252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.064331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.064356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.064446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.064472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.064589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.064614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.064709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.064736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.064868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.064907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.065005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.065033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.065117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.065144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.065262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.065289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.065389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.065428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.065551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.065578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.065696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.065723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.065862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.065888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.066005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.066031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.066119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.066146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.066236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.066261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.066379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.066409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.066495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.066522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.066632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.066660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.066759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.066786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.066925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.066979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.067132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.067187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.067320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.067360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.067468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.067494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.067612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.067638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.067763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.067790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.067915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.067943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.068057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.068083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.068168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.068194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.068333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.068359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.068487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.068513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.951 [2024-11-17 18:56:17.068658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.951 [2024-11-17 18:56:17.068701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.951 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.068814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.068840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.068932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.068959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.069067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.069093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.069178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.069203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.069301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.069329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.069417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.069442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.069541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.069580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.069699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.069738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.069852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.069877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.069975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.070001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.070085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.070112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.070242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.070282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.070403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.070453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.070593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.070619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.070727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.070760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.070879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.070907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.071039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.071090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.071269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.071317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.071415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.071453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.071583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.071609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.071696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.071722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.071817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.071843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.071952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.071977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.072117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.072143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.072248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.072273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.072387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.072413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.072542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.072581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.072684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.072714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.072858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.072885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.073048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.073088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.073224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.073274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.073418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.073475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.073599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.073627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.073752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.073779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.073894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.073920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.074002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.074028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.074133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.074187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.074320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.074358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.074462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.074492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.074586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.074612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.074703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.074742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.074833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.074861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.952 [2024-11-17 18:56:17.074947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.952 [2024-11-17 18:56:17.074974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.952 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.075060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.075087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.075186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.075225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.075318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.075346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.075461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.075488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.075572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.075598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.075718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.075757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.075878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.075906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.076009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.076036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.076138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.076192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.076282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.076308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.076429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.076455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.076543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.076570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.076659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.076691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.076777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.076805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.076918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.076944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.077034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.077059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.077176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.077201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.077294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.077323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.077418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.077444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.077528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.077555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.077638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.077666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.077763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.077789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.077895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.077936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.078060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.078088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.078178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.078205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.078328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.078354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.078467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.078494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.078624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.078664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.078776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.078803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.078918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.078944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.079126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.079192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.079456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.079509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.079596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.079622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.079770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.079798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.079914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.079940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.080038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.080094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.080314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.080369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.080483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.953 [2024-11-17 18:56:17.080509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.953 qpair failed and we were unable to recover it.
00:35:30.953 [2024-11-17 18:56:17.080608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.954 [2024-11-17 18:56:17.080647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.954 qpair failed and we were unable to recover it.
00:35:30.954 [2024-11-17 18:56:17.080864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.954 [2024-11-17 18:56:17.080903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.954 qpair failed and we were unable to recover it.
00:35:30.954 [2024-11-17 18:56:17.081051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.954 [2024-11-17 18:56:17.081079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.954 qpair failed and we were unable to recover it.
00:35:30.954 [2024-11-17 18:56:17.081232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.954 [2024-11-17 18:56:17.081290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.954 qpair failed and we were unable to recover it.
00:35:30.954 [2024-11-17 18:56:17.081458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.954 [2024-11-17 18:56:17.081512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.954 qpair failed and we were unable to recover it.
00:35:30.954 [2024-11-17 18:56:17.081704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.954 [2024-11-17 18:56:17.081731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.954 qpair failed and we were unable to recover it.
00:35:30.954 [2024-11-17 18:56:17.081847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.081873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.082014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.082040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.082138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.082165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.082245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.082271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.082407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.082433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 
00:35:30.954 [2024-11-17 18:56:17.082536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.082563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.082648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.082685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.082812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.082837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.082957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.082983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.083116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.083166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 
00:35:30.954 [2024-11-17 18:56:17.083311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.083360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.083443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.083469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.083610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.083635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.083742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.083770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.083886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.083911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 
00:35:30.954 [2024-11-17 18:56:17.083995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.084021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.084104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.084129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.084237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.084264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.084345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.084375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 00:35:30.954 [2024-11-17 18:56:17.084497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.954 [2024-11-17 18:56:17.084537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.954 qpair failed and we were unable to recover it. 
00:35:30.954 [2024-11-17 18:56:17.084671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.084734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.084868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.084907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.085054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.085081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.085231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.085257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.085374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.085400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 
00:35:30.955 [2024-11-17 18:56:17.085544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.085572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.085704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.085744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.085837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.085865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.085987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.086016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.086158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.086185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 
00:35:30.955 [2024-11-17 18:56:17.086274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.086301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.086395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.086424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.086552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.086580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.086725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.086752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.086872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.086898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 
00:35:30.955 [2024-11-17 18:56:17.087009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.087035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.087147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.087173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.087295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.087321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.087486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.087525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.087644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.087671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 
00:35:30.955 [2024-11-17 18:56:17.087825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.087852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.087973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.088001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.088114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.088140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.088247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.088273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.088488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.088538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 
00:35:30.955 [2024-11-17 18:56:17.088655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.088693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.088788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.088814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.088898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.088924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.089040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.089066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.089218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.089284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 
00:35:30.955 [2024-11-17 18:56:17.089396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.089451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.089593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.089619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.089745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.089772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.089861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.089889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.089984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.090011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 
00:35:30.955 [2024-11-17 18:56:17.090168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.090209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.090331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.955 [2024-11-17 18:56:17.090372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.955 qpair failed and we were unable to recover it. 00:35:30.955 [2024-11-17 18:56:17.090524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.090551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.090633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.090659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.090802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.090829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 
00:35:30.956 [2024-11-17 18:56:17.090923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.090949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.091134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.091174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.091392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.091431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.091555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.091595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.091753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.091780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 
00:35:30.956 [2024-11-17 18:56:17.091898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.091925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.092044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.092086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.092262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.092302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.092514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.092553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.092746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.092773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 
00:35:30.956 [2024-11-17 18:56:17.092860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.092888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.092980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.093007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.093131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.093158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.093277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.093317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.093446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.093493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 
00:35:30.956 [2024-11-17 18:56:17.093647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.093697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.093814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.093841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.093937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.093964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.094079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.094105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.094265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.094313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 
00:35:30.956 [2024-11-17 18:56:17.094531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.094571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.094750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.094777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.094889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.094916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.095042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.095069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.095175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.095201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 
00:35:30.956 [2024-11-17 18:56:17.095340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.095387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.095581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.095622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.095780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.095807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.095918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.095946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 00:35:30.956 [2024-11-17 18:56:17.096048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.956 [2024-11-17 18:56:17.096074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.956 qpair failed and we were unable to recover it. 
00:35:30.959 [2024-11-17 18:56:17.116478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.959 [2024-11-17 18:56:17.116518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.116651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.116713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.116893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.116934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.117070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.117112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.117313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.117355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 
00:35:30.960 [2024-11-17 18:56:17.117525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.117578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.117740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.117783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.117956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.117998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.118164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.118205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.118368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.118410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 
00:35:30.960 [2024-11-17 18:56:17.118579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.118621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.118759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.118803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.118944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.118987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.119156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.119197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.119363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.119406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 
00:35:30.960 [2024-11-17 18:56:17.119585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.119651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.119878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.119920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.120121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.120162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.120329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.120372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.120525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.120567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 
00:35:30.960 [2024-11-17 18:56:17.120705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.120747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.120945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.120986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.121150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.121191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.121364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.121406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.121576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.121617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 
00:35:30.960 [2024-11-17 18:56:17.121811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.121856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.122027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.122069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.122209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.122253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.122396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.122440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.122636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.122731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 
00:35:30.960 [2024-11-17 18:56:17.122934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.122976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.123151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.123196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.123359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.123403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.123572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.123616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.123773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.123817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 
00:35:30.960 [2024-11-17 18:56:17.123964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.124006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.124210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.124253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.124464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.960 [2024-11-17 18:56:17.124506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.960 qpair failed and we were unable to recover it. 00:35:30.960 [2024-11-17 18:56:17.124655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.124711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.124880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.124925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 
00:35:30.961 [2024-11-17 18:56:17.125058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.125102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.125239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.125282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.125466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.125509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.125696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.125741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.125879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.125923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 
00:35:30.961 [2024-11-17 18:56:17.126096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.126148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.126353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.126397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.126568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.126613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.126766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.126812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.126994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.127038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 
00:35:30.961 [2024-11-17 18:56:17.127220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.127264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.127438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.127482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.127639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.127696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.127892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.127936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.128083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.128129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 
00:35:30.961 [2024-11-17 18:56:17.128337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.128380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.128565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.128609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.128807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.128853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.128989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.129035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.129240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.129285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 
00:35:30.961 [2024-11-17 18:56:17.129457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.129501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.129700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.129746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.129897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.129941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.130099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.130146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.130316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.130360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 
00:35:30.961 [2024-11-17 18:56:17.130534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.130578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.130769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.130815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.130954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.130997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.131136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.131179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.131371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.131416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 
00:35:30.961 [2024-11-17 18:56:17.131554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.131597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.131798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.131843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.132007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.132053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.132190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.132233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 00:35:30.961 [2024-11-17 18:56:17.132409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.961 [2024-11-17 18:56:17.132455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.961 qpair failed and we were unable to recover it. 
00:35:30.961 [2024-11-17 18:56:17.132617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.961 [2024-11-17 18:56:17.132662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.961 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.132862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.132906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.133072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.133115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.133288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.133332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.133508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.133554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.133695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.133741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.133958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.134003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.134138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.134182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.134366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.134410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.134580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.134631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.134844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.134895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.135048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.135092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.135292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.135335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.135470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.135513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.135649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.135715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.135868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.135911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.136065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.136109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.136246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.136289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.136502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.136545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.136690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.136744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.136919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.136963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.137136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.137180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.137364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.137408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.137549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.137592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.137767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.137812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.137984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.138027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.138213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.138257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.138430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.138473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.138655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.138710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.138906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.138950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.139100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.139145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.139292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.139337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.139553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.139598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.139744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.962 [2024-11-17 18:56:17.139788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.962 qpair failed and we were unable to recover it.
00:35:30.962 [2024-11-17 18:56:17.139995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.140039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.140213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.140257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.140459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.140503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.140691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.140744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.140887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.140934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.141105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.141151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.141288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.141337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.141499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.141546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.141710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.141758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.141912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.141967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.142150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.142197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.142406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.142452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.142664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.142720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.142893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.142939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.143075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.143121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.143296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.143342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.143540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.143594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.143773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.143821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.143980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.144027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.144161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.144207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.144395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.144441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.144634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.144691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.144930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.144977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.145170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.145217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.145362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.145408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.145574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.145657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.145859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.145906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.146101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.146148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.146336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.146382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.146555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.146602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.146817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.146866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.147012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.147058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.147246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.147293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.147470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.147529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.147726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.147773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.147966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.148012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.148166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.148212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.963 [2024-11-17 18:56:17.148387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.963 [2024-11-17 18:56:17.148433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.963 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.148592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.148638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.148844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.148893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.149076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.149126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.149340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.149385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.149565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.149610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.149843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.149894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.150099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.150147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.150286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.150335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.150498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.150547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.150706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.150757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.150939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.150988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.151146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.151197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.151342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.151391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.151591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.151638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.151862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.151909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.152125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.152174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.152373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.152420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.152621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.152688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.152867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.152921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.153103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.153149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.153333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.153380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.153565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.153611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.153782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.153830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.154005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.154052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.154240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.154287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.154486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.154543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.154741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.154789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.154968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.155014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.155201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.155248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.155401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.155447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.155619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.155670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.155904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.155952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.156104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.156151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.156331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.156378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.156576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.156625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.156856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.156902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.157044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.964 [2024-11-17 18:56:17.157090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.964 qpair failed and we were unable to recover it.
00:35:30.964 [2024-11-17 18:56:17.157245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.965 [2024-11-17 18:56:17.157294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.965 qpair failed and we were unable to recover it.
00:35:30.965 [2024-11-17 18:56:17.157491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.965 [2024-11-17 18:56:17.157541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.965 qpair failed and we were unable to recover it.
00:35:30.965 [2024-11-17 18:56:17.157734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.965 [2024-11-17 18:56:17.157782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.965 qpair failed and we were unable to recover it.
00:35:30.965 [2024-11-17 18:56:17.158000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.965 [2024-11-17 18:56:17.158046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.965 qpair failed and we were unable to recover it.
00:35:30.965 [2024-11-17 18:56:17.158234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.965 [2024-11-17 18:56:17.158282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.965 qpair failed and we were unable to recover it.
00:35:30.965 [2024-11-17 18:56:17.158475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.965 [2024-11-17 18:56:17.158521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.965 qpair failed and we were unable to recover it.
00:35:30.965 [2024-11-17 18:56:17.158705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.158752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.158922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.158971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.159162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.159209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.159393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.159442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.159707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.159774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 
00:35:30.965 [2024-11-17 18:56:17.160001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.160072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.160285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.160349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.160507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.160553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.161279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.161332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.161572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.161620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 
00:35:30.965 [2024-11-17 18:56:17.161833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.161881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.162035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.162096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.162347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.162430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.162583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.162630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.162812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.162856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 
00:35:30.965 [2024-11-17 18:56:17.163046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.163093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.163263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.163309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.163499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.163543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.163716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.163763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.163909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.163954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 
00:35:30.965 [2024-11-17 18:56:17.164171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.164216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.164404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.164450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.164638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.164699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.164926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.164973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.165159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.165221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 
00:35:30.965 [2024-11-17 18:56:17.165377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.165423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.165640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.165722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.165888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.165954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.166147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.166192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.166339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.166382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 
00:35:30.965 [2024-11-17 18:56:17.166537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.166605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.166782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.965 [2024-11-17 18:56:17.166827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.965 qpair failed and we were unable to recover it. 00:35:30.965 [2024-11-17 18:56:17.166999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.167045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.167224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.167270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.167448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.167493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 
00:35:30.966 [2024-11-17 18:56:17.167647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.167708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.167875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.167923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.168133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.168178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.168321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.168364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.168520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.168568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 
00:35:30.966 [2024-11-17 18:56:17.168772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.168843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.169007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.169073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.169216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.169270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.169469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.169515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.169655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.169711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 
00:35:30.966 [2024-11-17 18:56:17.169892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.169948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.170116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.170188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.170386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.170436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.170650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.170712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.170909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.170956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 
00:35:30.966 [2024-11-17 18:56:17.171089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.171136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.171449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.171496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.171636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.171691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.171881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.171927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.172145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.172194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 
00:35:30.966 [2024-11-17 18:56:17.172401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.172464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.172654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.172719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.172887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.172937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.173169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.173221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.173437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.173502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 
00:35:30.966 [2024-11-17 18:56:17.173771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.173819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.173997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.174044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.174256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.174302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.174480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.174531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.174733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.174780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 
00:35:30.966 [2024-11-17 18:56:17.174937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.174984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.175180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.175234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.175440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.175492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.175747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.175796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 00:35:30.966 [2024-11-17 18:56:17.175981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.966 [2024-11-17 18:56:17.176028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.966 qpair failed and we were unable to recover it. 
00:35:30.967 [2024-11-17 18:56:17.176172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.967 [2024-11-17 18:56:17.176242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.967 qpair failed and we were unable to recover it. 00:35:30.967 [2024-11-17 18:56:17.176455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.967 [2024-11-17 18:56:17.176510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.967 qpair failed and we were unable to recover it. 00:35:30.967 [2024-11-17 18:56:17.176696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.967 [2024-11-17 18:56:17.176745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.967 qpair failed and we were unable to recover it. 00:35:30.967 [2024-11-17 18:56:17.176925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.967 [2024-11-17 18:56:17.176971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.967 qpair failed and we were unable to recover it. 00:35:30.967 [2024-11-17 18:56:17.177199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.967 [2024-11-17 18:56:17.177251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:30.967 qpair failed and we were unable to recover it. 
00:35:30.967 [2024-11-17 18:56:17.177442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.967 [2024-11-17 18:56:17.177493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.967 qpair failed and we were unable to recover it.
00:35:30.968 [2024-11-17 18:56:17.185060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.968 [2024-11-17 18:56:17.185127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.968 qpair failed and we were unable to recover it.
00:35:30.969 [2024-11-17 18:56:17.196479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.969 [2024-11-17 18:56:17.196546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.969 qpair failed and we were unable to recover it.
[the same three-line sequence -- posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it." -- repeats continuously from 18:56:17.177442 through 18:56:17.201590 for tqpair=0x7f4db4000b90, tqpair=0xa3f690, and tqpair=0x7f4db8000b90, every attempt targeting addr=10.0.0.2, port=4420]
00:35:30.970 [2024-11-17 18:56:17.201735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.201785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.201910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.201943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.202140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.202175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.202301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.202338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.202492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.202528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 
00:35:30.970 [2024-11-17 18:56:17.202660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.202704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.202814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.202850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.203030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.203065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.203200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.203250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.203425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.203484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 
00:35:30.970 [2024-11-17 18:56:17.203689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.203724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.203856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.203890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.204023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.204056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.204176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.204210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.204333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.204368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 
00:35:30.970 [2024-11-17 18:56:17.204536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.204571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.204713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.204747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.204962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.204996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.205142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.205175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.205323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.205356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 
00:35:30.970 [2024-11-17 18:56:17.205502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.205535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.205647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.205687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.205797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.205831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.205971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.206005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.206242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.206287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 
00:35:30.970 [2024-11-17 18:56:17.206456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.206506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.970 qpair failed and we were unable to recover it. 00:35:30.970 [2024-11-17 18:56:17.206751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.970 [2024-11-17 18:56:17.206787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.206946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.206980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.207152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.207185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.207291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.207326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 
00:35:30.971 [2024-11-17 18:56:17.207534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.207567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.207742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.207777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.207925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.207982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.208138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.208185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.208344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.208396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 
00:35:30.971 [2024-11-17 18:56:17.208595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.208640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.208836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.208883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.209023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.209068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.209242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.209286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.209466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.209510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 
00:35:30.971 [2024-11-17 18:56:17.209697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.209752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.209892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.209936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.210078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.210126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.210303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.210347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.210497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.210533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 
00:35:30.971 [2024-11-17 18:56:17.210654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.210698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.210875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.210912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.211027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.211062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.211266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.211310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.211495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.211539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 
00:35:30.971 [2024-11-17 18:56:17.211726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.211763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.211891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.211927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.212076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.212111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.212224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.212261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.212445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.212495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 
00:35:30.971 [2024-11-17 18:56:17.212685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.212737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.212926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.212973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.213157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.213203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.213393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.213438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.213612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.213656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 
00:35:30.971 [2024-11-17 18:56:17.213816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.213861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.214042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.214088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.214248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.214295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.214518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.214555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 00:35:30.971 [2024-11-17 18:56:17.214666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.971 [2024-11-17 18:56:17.214712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.971 qpair failed and we were unable to recover it. 
00:35:30.972 [2024-11-17 18:56:17.214846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.214882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.215041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.215079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.215207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.215244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.215367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.215403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.215527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.215563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 
00:35:30.972 [2024-11-17 18:56:17.215691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.215747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.215929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.215965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.216148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.216186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.216361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.216422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.216602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.216648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 
00:35:30.972 [2024-11-17 18:56:17.216799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.216846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.216978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.217024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.217196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.217240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.217390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.217434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.217650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.217723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 
00:35:30.972 [2024-11-17 18:56:17.217895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.217938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.218079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.218134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.218307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.218352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.218522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.218565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 00:35:30.972 [2024-11-17 18:56:17.218753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.972 [2024-11-17 18:56:17.218790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.972 qpair failed and we were unable to recover it. 
00:35:30.972 [2024-11-17 18:56:17.218913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.972 [2024-11-17 18:56:17.218951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.972 qpair failed and we were unable to recover it.
00:35:30.972 [2024-11-17 18:56:17.219133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.972 [2024-11-17 18:56:17.219171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.972 qpair failed and we were unable to recover it.
00:35:30.972 [2024-11-17 18:56:17.219413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.972 [2024-11-17 18:56:17.219467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.972 qpair failed and we were unable to recover it.
00:35:30.972 [2024-11-17 18:56:17.219666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.972 [2024-11-17 18:56:17.219717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.972 qpair failed and we were unable to recover it.
00:35:30.972 [2024-11-17 18:56:17.219873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.972 [2024-11-17 18:56:17.219913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.972 qpair failed and we were unable to recover it.
00:35:30.972 [2024-11-17 18:56:17.220409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.972 [2024-11-17 18:56:17.220451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.972 qpair failed and we were unable to recover it.
00:35:30.972 [2024-11-17 18:56:17.220585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.972 [2024-11-17 18:56:17.220623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.972 qpair failed and we were unable to recover it.
00:35:30.972 [2024-11-17 18:56:17.220760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.972 [2024-11-17 18:56:17.220802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.972 qpair failed and we were unable to recover it.
00:35:30.972 [2024-11-17 18:56:17.220959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.972 [2024-11-17 18:56:17.220997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.972 qpair failed and we were unable to recover it.
00:35:30.972 [2024-11-17 18:56:17.221157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.972 [2024-11-17 18:56:17.221195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.972 qpair failed and we were unable to recover it.
00:35:30.972 [2024-11-17 18:56:17.221364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.972 [2024-11-17 18:56:17.221421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.972 qpair failed and we were unable to recover it.
00:35:30.972 [2024-11-17 18:56:17.221559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.972 [2024-11-17 18:56:17.221596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.972 qpair failed and we were unable to recover it.
00:35:30.972 [2024-11-17 18:56:17.221749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.972 [2024-11-17 18:56:17.221788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.972 qpair failed and we were unable to recover it.
00:35:30.972 [2024-11-17 18:56:17.221908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.221948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.222143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.222181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.222296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.222334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.222527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.222565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.222697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.222736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.222894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.222932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.223072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.223128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.223289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.223326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.223472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.223510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.223670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.223715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.223922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.223989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.224161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.224212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.224378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.224416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.224580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.224616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.224736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.224797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.225004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.225072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.225300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.225386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.225610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.225648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.225781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.225817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.225961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.225996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.226119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.226175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.226324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.226370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.226557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.226604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.226774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.226812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.227001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.227046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.227179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.227224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.227406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.227452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.227656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.227702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.227830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.227864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.228061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.228096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.228251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.228286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.228436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.228472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.228613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.228648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.228809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.228843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.228983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.229029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.229174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.229240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.229454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.229501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.229687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.973 [2024-11-17 18:56:17.229723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.973 qpair failed and we were unable to recover it.
00:35:30.973 [2024-11-17 18:56:17.229859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.229894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.230006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.230064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.230279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.230325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.230476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.230523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.230691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.230749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.230869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.230904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.231055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.231090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.231244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.231290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.231432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.231489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.231647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.231691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.231846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.231882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.232042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.232089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.232275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.232310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.232437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.232473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.232606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.232641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.232762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.232797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.232977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.233011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.233162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.233209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.233451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.233496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.233710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.233747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.233898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.233933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.234051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.234087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.234232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.234266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.234453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.234489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.234654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.234713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.234884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.234919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.235038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.235104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.235247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.235292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.235478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.235523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.235734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.235805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.235953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.235993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.236143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.236190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.236354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.236398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.236559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.236594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.236782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.236817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.236952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.236989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.237132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.237167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.237384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.974 [2024-11-17 18:56:17.237439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.974 qpair failed and we were unable to recover it.
00:35:30.974 [2024-11-17 18:56:17.237638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.237722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.237881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.237915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.238033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.238092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.238312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.238399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.238609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.238661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.238880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.238916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.239036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.239071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.239242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.239312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.239481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.239514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.239688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.239723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.239891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.239927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.240044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.240101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.240287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.240332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.240524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.240569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.240741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.240790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.240978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.241025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.241208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.241258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.241523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.241569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.241712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.241785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.241973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.242019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.242232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.242289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.242469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.242512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.242634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.242683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.242850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.242891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.243021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.243086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.243253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.243293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.243449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.243509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.243710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.243750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.243873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.243921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.244053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.244092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.244243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.244282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.244448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.244488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.244687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.244728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.244960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.245000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.245142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.245188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.245368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.245432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.245570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.245611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.245788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.975 [2024-11-17 18:56:17.245828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.975 qpair failed and we were unable to recover it.
00:35:30.975 [2024-11-17 18:56:17.246020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.246059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.246211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.246253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.246400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.246439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.246595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.246635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.246833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.246869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.247008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.247043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.247158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.247193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.247334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.247367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.247514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.247576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.247757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.247798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.247927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.247967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.248135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.248168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.248310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.248343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.248484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.248516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.248631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.248664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.248795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.248831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.248947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.248982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.249117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.249168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.249356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.249392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.249608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.249653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.249851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.249886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.250004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.250040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.250177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.250211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.250322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.250356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.250475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.250510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.250664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.250734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.250886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.250916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.251013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.251044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.251177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.251207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.251312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.251341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.251438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.251503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.251718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.251771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.251935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.251970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.252078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.976 [2024-11-17 18:56:17.252111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.976 qpair failed and we were unable to recover it.
00:35:30.976 [2024-11-17 18:56:17.252252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.252287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.252426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.252495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.252657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.252697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.252845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.252898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.253081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.253117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.253302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.253340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.253482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.253530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.253634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.253664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.253808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.253841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.253944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.253975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.254135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.254171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.254340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.254371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.254472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.254503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.254658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.254694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.254801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.254831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.254931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.254962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.255121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.255155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.255327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.255363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.255487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.255519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.255623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.255653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.255801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.255832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.255938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.255968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.256101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.256131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.256264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.256295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.256411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.256443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.256614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.256645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.256753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.256783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.256947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.256977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.257120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.977 [2024-11-17 18:56:17.257152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.977 qpair failed and we were unable to recover it.
00:35:30.977 [2024-11-17 18:56:17.257249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.977 [2024-11-17 18:56:17.257280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.977 qpair failed and we were unable to recover it. 00:35:30.977 [2024-11-17 18:56:17.257403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.977 [2024-11-17 18:56:17.257450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.977 qpair failed and we were unable to recover it. 00:35:30.977 [2024-11-17 18:56:17.257567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.977 [2024-11-17 18:56:17.257598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.977 qpair failed and we were unable to recover it. 00:35:30.977 [2024-11-17 18:56:17.257705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.977 [2024-11-17 18:56:17.257736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.977 qpair failed and we were unable to recover it. 00:35:30.977 [2024-11-17 18:56:17.257832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.977 [2024-11-17 18:56:17.257863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.977 qpair failed and we were unable to recover it. 
00:35:30.977 [2024-11-17 18:56:17.258022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.977 [2024-11-17 18:56:17.258052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.977 qpair failed and we were unable to recover it. 00:35:30.977 [2024-11-17 18:56:17.258153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.977 [2024-11-17 18:56:17.258203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.977 qpair failed and we were unable to recover it. 00:35:30.977 [2024-11-17 18:56:17.258344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.977 [2024-11-17 18:56:17.258382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.977 qpair failed and we were unable to recover it. 00:35:30.977 [2024-11-17 18:56:17.258562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.977 [2024-11-17 18:56:17.258600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.977 qpair failed and we were unable to recover it. 00:35:30.977 [2024-11-17 18:56:17.258772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.977 [2024-11-17 18:56:17.258802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 
00:35:30.978 [2024-11-17 18:56:17.258900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.258930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.259053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.259084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.259176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.259206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.259308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.259338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.259476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.259506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 
00:35:30.978 [2024-11-17 18:56:17.259607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.259638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.259766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.259814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.259946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.259980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.260084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.260116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.260245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.260276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 
00:35:30.978 [2024-11-17 18:56:17.260366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.260398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.260508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.260540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.260641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.260680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.260787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.260817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.260922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.260953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 
00:35:30.978 [2024-11-17 18:56:17.261047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.261078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.261206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.261236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.261371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.261403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.261533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.261563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.261703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.261735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 
00:35:30.978 [2024-11-17 18:56:17.261864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.261895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.262035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.262069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.262233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.262286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.262446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.262477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.262612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.262643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 
00:35:30.978 [2024-11-17 18:56:17.262765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.262798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.262941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.262973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.263073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.263105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.263201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.263231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.263336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.263368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 
00:35:30.978 [2024-11-17 18:56:17.263473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.263503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.263631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.263662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.263778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.263809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.263912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.263942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.264072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.264103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 
00:35:30.978 [2024-11-17 18:56:17.264204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.264251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.264477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.264526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.978 [2024-11-17 18:56:17.264743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.978 [2024-11-17 18:56:17.264800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.978 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.264958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.264996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.265191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.265240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 
00:35:30.979 [2024-11-17 18:56:17.265403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.265452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.265648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.265736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.265937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.266005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.266160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.266209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.266408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.266457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 
00:35:30.979 [2024-11-17 18:56:17.266633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.266669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.266787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.266823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.266979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.267016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.267144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.267180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.267359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.267408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 
00:35:30.979 [2024-11-17 18:56:17.267598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.267634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.267811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.267861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.268055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.268092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.268216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.268254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.268403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.268440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 
00:35:30.979 [2024-11-17 18:56:17.268592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.268628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.268764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.268803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.269012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.269062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.269239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.269290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.269439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.269487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 
00:35:30.979 [2024-11-17 18:56:17.269697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.269759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.269897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.269943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.270146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.270195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.270377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.270426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.270622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.270667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 
00:35:30.979 [2024-11-17 18:56:17.270837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.270881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.271134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.271204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.271444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.271492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.271741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.271789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.271975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.272021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 
00:35:30.979 [2024-11-17 18:56:17.272194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.272242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.272430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.272478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.272632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.272689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.272895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.272940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.273125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.273171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 
00:35:30.979 [2024-11-17 18:56:17.273342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.979 [2024-11-17 18:56:17.273389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.979 qpair failed and we were unable to recover it. 00:35:30.979 [2024-11-17 18:56:17.273563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.980 [2024-11-17 18:56:17.273611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.980 qpair failed and we were unable to recover it. 00:35:30.980 [2024-11-17 18:56:17.273837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.980 [2024-11-17 18:56:17.273883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.980 qpair failed and we were unable to recover it. 00:35:30.980 [2024-11-17 18:56:17.274038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.980 [2024-11-17 18:56:17.274085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.980 qpair failed and we were unable to recover it. 00:35:30.980 [2024-11-17 18:56:17.274350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.980 [2024-11-17 18:56:17.274401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.980 qpair failed and we were unable to recover it. 
00:35:30.980 [2024-11-17 18:56:17.274635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.980 [2024-11-17 18:56:17.274713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.980 qpair failed and we were unable to recover it. 00:35:30.980 [2024-11-17 18:56:17.274901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.980 [2024-11-17 18:56:17.274949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.980 qpair failed and we were unable to recover it. 00:35:30.980 [2024-11-17 18:56:17.275197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.980 [2024-11-17 18:56:17.275243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.980 qpair failed and we were unable to recover it. 00:35:30.980 [2024-11-17 18:56:17.275451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.980 [2024-11-17 18:56:17.275499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.980 qpair failed and we were unable to recover it. 00:35:30.980 [2024-11-17 18:56:17.275648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.980 [2024-11-17 18:56:17.275726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.980 qpair failed and we were unable to recover it. 
00:35:30.980 [2024-11-17 18:56:17.275933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.275981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.276209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.276257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.276444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.276493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.276662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.276742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.276900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.276949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.277099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.277146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.277323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.277370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.277556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.277603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.277774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.277822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.277991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.278040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.278237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.278285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.278434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.278482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.278665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.278728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.278959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.279007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.279199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.279247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.279442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.279490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.279694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.279743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.279899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.279947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.280111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.280158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.280343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.280391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.280554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.280603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.280811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.280861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.281027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.281082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.281241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.281288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.281442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.281489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.281700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.281750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.281976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.282024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.282226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.282275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.282464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.282512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.980 qpair failed and we were unable to recover it.
00:35:30.980 [2024-11-17 18:56:17.282671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.980 [2024-11-17 18:56:17.282730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.282876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.282942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.283159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.283211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.283371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.283420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.283580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.283631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.283938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.284014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.284226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.284278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.284520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.284570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.284799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.284850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.285118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.285169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.285396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.285447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.285628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.285722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.285891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.285940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.286176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.286224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.286421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.286472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.286637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.286698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.286861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.286909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.287058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.287105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.287287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.287337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.287534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.287584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.287786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.287853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.288068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.288120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.288299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.288353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.288524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.288577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.288787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.288842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.289049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.289101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.289340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.289391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.289568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.289620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.289820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.289874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.290047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.290101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.290307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.290358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.290517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.290568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.290812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.290865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.291039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.291089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.981 qpair failed and we were unable to recover it.
00:35:30.981 [2024-11-17 18:56:17.291256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.981 [2024-11-17 18:56:17.291307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.291462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.291512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.291751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.291804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.292012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.292064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.292267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.292318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.292561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.292613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.292827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.292881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.293052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.293104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.293263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.293316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.293488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.293540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.293771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.293824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.294015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.294067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.294282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.294334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.294494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.294553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.294741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.294796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.295005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.295058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.295269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.295321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.295518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.295570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.295735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.295787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.295984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.296034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.296184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.296236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.296428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.296479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.296697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.296750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.296919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.296969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.297167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.297218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.297417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.297468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.297670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.297736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.297939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.297991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.298145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.298198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.298383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.298434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.298652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.298719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.298955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.982 [2024-11-17 18:56:17.299008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.982 qpair failed and we were unable to recover it.
00:35:30.982 [2024-11-17 18:56:17.299215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.982 [2024-11-17 18:56:17.299269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.982 qpair failed and we were unable to recover it. 00:35:30.982 [2024-11-17 18:56:17.299459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.982 [2024-11-17 18:56:17.299511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.982 qpair failed and we were unable to recover it. 00:35:30.982 [2024-11-17 18:56:17.299709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.982 [2024-11-17 18:56:17.299762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.982 qpair failed and we were unable to recover it. 00:35:30.982 [2024-11-17 18:56:17.299957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.982 [2024-11-17 18:56:17.300009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.982 qpair failed and we were unable to recover it. 00:35:30.982 [2024-11-17 18:56:17.300171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.982 [2024-11-17 18:56:17.300223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.982 qpair failed and we were unable to recover it. 
00:35:30.982 [2024-11-17 18:56:17.300413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.982 [2024-11-17 18:56:17.300464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.982 qpair failed and we were unable to recover it. 00:35:30.982 [2024-11-17 18:56:17.300638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.982 [2024-11-17 18:56:17.300701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.982 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.300907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.300958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.301145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.301195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.301440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.301492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 
00:35:30.983 [2024-11-17 18:56:17.301697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.301750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.301921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.301972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.302142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.302191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.302351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.302406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.302580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.302632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 
00:35:30.983 [2024-11-17 18:56:17.302839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.302891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.303103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.303154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.303397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.303450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.303643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.303709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.303886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.303949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 
00:35:30.983 [2024-11-17 18:56:17.304132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.304184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.304385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.304439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.304657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.304736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.304993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.305050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.305227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.305283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 
00:35:30.983 [2024-11-17 18:56:17.305535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.305590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.305799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.305856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.306040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.306098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.306347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.306402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.306589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.306641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 
00:35:30.983 [2024-11-17 18:56:17.306869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.306921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.307125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.307176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.307345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.307396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.307618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.307689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.307951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.308006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 
00:35:30.983 [2024-11-17 18:56:17.308199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.308254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.308443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.308499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.308724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.308777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.308979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.309030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.309277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.309329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 
00:35:30.983 [2024-11-17 18:56:17.309508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.309567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.309771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.309827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.310028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.310080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.310262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.310314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 00:35:30.983 [2024-11-17 18:56:17.310520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.983 [2024-11-17 18:56:17.310572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.983 qpair failed and we were unable to recover it. 
00:35:30.983 [2024-11-17 18:56:17.310759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.310812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.310972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.311023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.311227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.311279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.311483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.311534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.311753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.311806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 
00:35:30.984 [2024-11-17 18:56:17.312009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.312061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.312265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.312316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.312549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.312601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.312879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.312941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.313139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.313217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 
00:35:30.984 [2024-11-17 18:56:17.313518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.313570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.313739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.313801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.314087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.314146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.314383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.314443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.314667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.314732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 
00:35:30.984 [2024-11-17 18:56:17.314914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.314967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.315182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.315239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.315450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.315505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.315707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.315764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.315988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.316044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 
00:35:30.984 [2024-11-17 18:56:17.316269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.316324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.316546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.316603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.316825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.316881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.317085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.317140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.317331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.317387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 
00:35:30.984 [2024-11-17 18:56:17.317616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.317688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.317856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.317912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.318074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.318131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.318330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.318386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.318612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.318668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 
00:35:30.984 [2024-11-17 18:56:17.318911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.318967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.319220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.319275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.319451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.319507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.319702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.319759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.319971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.320026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 
00:35:30.984 [2024-11-17 18:56:17.320203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.320259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.320433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.320487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.984 qpair failed and we were unable to recover it. 00:35:30.984 [2024-11-17 18:56:17.320656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.984 [2024-11-17 18:56:17.320729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.320902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.320958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.321171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.321226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 
00:35:30.985 [2024-11-17 18:56:17.321458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.321513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.321740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.321798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.321986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.322043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.322229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.322285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.322458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.322514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 
00:35:30.985 [2024-11-17 18:56:17.322720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.322785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.323037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.323093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.323269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.323325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.323581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.323636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.323884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.323940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 
00:35:30.985 [2024-11-17 18:56:17.324119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.324175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.324333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.324388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.324561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.324616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.324848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.324905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.325156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.325212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 
00:35:30.985 [2024-11-17 18:56:17.325432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.325487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.325651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.325760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.325965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.326025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.326252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.326312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.326604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.326663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 
00:35:30.985 [2024-11-17 18:56:17.326905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.326965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.327221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.327277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.327476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.327530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.327755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.327812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.327983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.328058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 
00:35:30.985 [2024-11-17 18:56:17.328250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.328310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.328516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.328575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.328852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.328913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.329151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.329205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.329400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.329455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 
00:35:30.985 [2024-11-17 18:56:17.329671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.329740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.330016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.985 [2024-11-17 18:56:17.330072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.985 qpair failed and we were unable to recover it. 00:35:30.985 [2024-11-17 18:56:17.330260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.330316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.330539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.330596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.330856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.330913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 
00:35:30.986 [2024-11-17 18:56:17.331091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.331147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.331333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.331389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.331640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.331710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.331926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.331981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.332167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.332222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 
00:35:30.986 [2024-11-17 18:56:17.332397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.332452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.332664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.332735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.332961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.333017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.333227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.333283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.333546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.333603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 
00:35:30.986 [2024-11-17 18:56:17.333814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.333874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.334096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.334160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.334367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.334422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.334644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.334722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.334992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.335052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 
00:35:30.986 [2024-11-17 18:56:17.335235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.335294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.335563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.335623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.335912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.335973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.336157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.336216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.336480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.336539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 
00:35:30.986 [2024-11-17 18:56:17.336741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.336804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.337070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.337129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.337379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.337440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.337699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.337760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.338000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.338059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 
00:35:30.986 [2024-11-17 18:56:17.338295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.338355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.338634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.338708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.338913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.338973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.339176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.339236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.339459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.339518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 
00:35:30.986 [2024-11-17 18:56:17.339754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.339816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.340045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.340107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.340386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.340446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.340645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.340722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 00:35:30.986 [2024-11-17 18:56:17.340917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.986 [2024-11-17 18:56:17.340977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.986 qpair failed and we were unable to recover it. 
00:35:30.987 [2024-11-17 18:56:17.341210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.341269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.341474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.341534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.341776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.341838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.342074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.342150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.342366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.342425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 
00:35:30.987 [2024-11-17 18:56:17.342700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.342761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.342989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.343050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.343285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.343344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.343563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.343623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.343846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.343908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 
00:35:30.987 [2024-11-17 18:56:17.344139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.344199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.344383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.344442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.344633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.344707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.344929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.344989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.345199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.345259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 
00:35:30.987 [2024-11-17 18:56:17.345447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.345507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.345737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.345799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.346032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.346093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.346365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.346425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.346612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.346671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 
00:35:30.987 [2024-11-17 18:56:17.346887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.346947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.347176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.347235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.347466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.347525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.347723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.347784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.348016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.348075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 
00:35:30.987 [2024-11-17 18:56:17.348260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.348321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.348564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.348627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.348865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.348926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.349192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.349252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.349464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.349523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 
00:35:30.987 [2024-11-17 18:56:17.349714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.349777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.349970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.350030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.350253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.350312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.350489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.350552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 00:35:30.987 [2024-11-17 18:56:17.350838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.987 [2024-11-17 18:56:17.350900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.987 qpair failed and we were unable to recover it. 
00:35:30.987 [2024-11-17 18:56:17.351129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.987 [2024-11-17 18:56:17.351188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.987 qpair failed and we were unable to recover it.
00:35:30.987 [2024-11-17 18:56:17.351414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.987 [2024-11-17 18:56:17.351473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.987 qpair failed and we were unable to recover it.
00:35:30.987 [2024-11-17 18:56:17.351649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.987 [2024-11-17 18:56:17.351721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.987 qpair failed and we were unable to recover it.
00:35:30.987 [2024-11-17 18:56:17.351928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.351988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.352182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.352242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.352436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.352495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.352697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.352759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.352955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.353014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.353252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.353311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.353551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.353620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.353923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.354016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.354330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.354395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.354628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.354715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.354963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.355025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.355285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.355349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.355576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.355637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.355940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.356003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.356238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.356298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.356534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.356594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.356860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.356943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.357154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.357237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.357511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.357571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.357812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.357873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.358109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.358168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.358365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.358424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.358618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.358695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.358943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.359003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.359272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.359332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.359576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.359635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.359852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.359912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.360179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.360239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.360506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.360565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.360849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.360911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.361164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.361242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.361466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.361525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.361797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.361859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.362158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.362229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.362470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.362529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.362732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.362793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.362990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.363053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.363267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.363327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.988 qpair failed and we were unable to recover it.
00:35:30.988 [2024-11-17 18:56:17.363592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.988 [2024-11-17 18:56:17.363653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.363889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.363950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.364180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.364243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.364477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.364538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.364804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.364864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.365059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.365119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.365339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.365399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.365582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.365641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.365867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.365927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.366206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.366267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.366508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.366567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.366835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.366897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.367146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.367206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.367389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.367451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.367700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.367767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.367972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.368032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.368303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.368383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.368577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.368637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.368886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.368945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.369211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.369286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.369563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.369622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.369875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.369954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.370221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.370307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.370546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.370608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.370880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.370958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.371245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.371324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.371510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.371571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.371790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.371868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.372120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.372198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.372398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.372458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.372768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.372829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.373036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.373095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.373275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.373334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.373547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.373605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.373812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.373872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.374098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.374157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.374363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.374422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.374669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.374740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.989 [2024-11-17 18:56:17.375011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.989 [2024-11-17 18:56:17.375072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.989 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.375258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.375317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.375538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.375597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.375819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.375879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.376080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.376139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.376332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.376391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.376662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.376738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.377010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.377072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.377287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.377368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.377558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.377617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.377926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.377987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.378251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.378310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.378544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.378604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.378836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.378898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.379130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.379188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.379385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:30.990 [2024-11-17 18:56:17.379465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:30.990 qpair failed and we were unable to recover it.
00:35:30.990 [2024-11-17 18:56:17.379736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.379799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.380016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.380075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.380315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.380394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.380582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.380644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.380887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.380948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 
00:35:30.990 [2024-11-17 18:56:17.381248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.381307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.381527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.381587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.381856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.381934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.382140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.382225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.382492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.382562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 
00:35:30.990 [2024-11-17 18:56:17.382818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.382900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.383138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.383218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.383465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.383526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.383793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.383874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.384146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.384209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 
00:35:30.990 [2024-11-17 18:56:17.384459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.384518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.384744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.384824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.385058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.990 [2024-11-17 18:56:17.385118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.990 qpair failed and we were unable to recover it. 00:35:30.990 [2024-11-17 18:56:17.385294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.385353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.385599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.385659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 
00:35:30.991 [2024-11-17 18:56:17.385942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.386020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.386268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.386347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.386576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.386638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.386943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.387004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.387278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.387356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 
00:35:30.991 [2024-11-17 18:56:17.387546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.387605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.387878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.387939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.388200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.388279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.388480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.388539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.388773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.388835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 
00:35:30.991 [2024-11-17 18:56:17.389111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.389190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.389421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.389480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.389694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.389757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.390027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.390105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.390366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.390443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 
00:35:30.991 [2024-11-17 18:56:17.390637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.390712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.390963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.391041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.391317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.391393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.391629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.391698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.391955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.392034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 
00:35:30.991 [2024-11-17 18:56:17.392226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.392285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.392491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.392550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.392770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.392850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.393070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.393148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.393409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.393469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 
00:35:30.991 [2024-11-17 18:56:17.393643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.393732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.394001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.394081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.394392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.394469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.394671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.394744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.394982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.395061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 
00:35:30.991 [2024-11-17 18:56:17.395373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.395442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.395689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.395750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.395965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.396050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.396311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.396390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.396660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.396738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 
00:35:30.991 [2024-11-17 18:56:17.397009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.991 [2024-11-17 18:56:17.397086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.991 qpair failed and we were unable to recover it. 00:35:30.991 [2024-11-17 18:56:17.397376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.397455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.397730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.397791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.398009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.398088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.398343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.398419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 
00:35:30.992 [2024-11-17 18:56:17.398638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.398708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.398945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.399005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.399191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.399250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.399513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.399589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.399904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.399964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 
00:35:30.992 [2024-11-17 18:56:17.400161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.400239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.400467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.400542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.400765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.400846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.401066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.401144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.401391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.401469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 
00:35:30.992 [2024-11-17 18:56:17.401751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.401813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.402027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.402105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.402367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.402425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.402662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.402732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.403048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.403108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 
00:35:30.992 [2024-11-17 18:56:17.403353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.403428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.403652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.403723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.403976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.404072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.404362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.404423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.404703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.404765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 
00:35:30.992 [2024-11-17 18:56:17.405028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.405087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.405349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.405409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.405611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.405670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.405923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.406002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.406226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.406307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 
00:35:30.992 [2024-11-17 18:56:17.406523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.406582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.406901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.406980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.407296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.407373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.407640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.407718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 00:35:30.992 [2024-11-17 18:56:17.407974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.992 [2024-11-17 18:56:17.408051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.992 qpair failed and we were unable to recover it. 
00:35:30.995 [2024-11-17 18:56:17.442771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.995 [2024-11-17 18:56:17.442851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.995 qpair failed and we were unable to recover it. 00:35:30.995 [2024-11-17 18:56:17.443073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.995 [2024-11-17 18:56:17.443150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.995 qpair failed and we were unable to recover it. 00:35:30.995 [2024-11-17 18:56:17.443378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.995 [2024-11-17 18:56:17.443438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.995 qpair failed and we were unable to recover it. 00:35:30.995 [2024-11-17 18:56:17.443667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.995 [2024-11-17 18:56:17.443738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.995 qpair failed and we were unable to recover it. 00:35:30.995 [2024-11-17 18:56:17.443947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.995 [2024-11-17 18:56:17.444007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.995 qpair failed and we were unable to recover it. 
00:35:30.995 [2024-11-17 18:56:17.444192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.444252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.444518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.444578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.444863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.444923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.445195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.445255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.445511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.445571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 
00:35:30.996 [2024-11-17 18:56:17.445773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.445833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.446065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.446125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.446316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.446375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.446607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.446693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.446945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.447024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 
00:35:30.996 [2024-11-17 18:56:17.447252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.447332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.447563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.447622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.447927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.447988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.448251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.448327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.448508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.448567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 
00:35:30.996 [2024-11-17 18:56:17.448786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.448866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.449080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.449161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.449433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.449493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.449733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.449813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.450026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.450102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 
00:35:30.996 [2024-11-17 18:56:17.450294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.450353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.450614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.450686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.450930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.450990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.451186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.451245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:30.996 [2024-11-17 18:56:17.451470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.451529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 
00:35:30.996 [2024-11-17 18:56:17.451758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.996 [2024-11-17 18:56:17.451840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:30.996 qpair failed and we were unable to recover it. 00:35:31.274 [2024-11-17 18:56:17.452027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.274 [2024-11-17 18:56:17.452087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.274 qpair failed and we were unable to recover it. 00:35:31.274 [2024-11-17 18:56:17.452319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.274 [2024-11-17 18:56:17.452377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.452600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.452659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.452865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.452925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 
00:35:31.275 [2024-11-17 18:56:17.453200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.453259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.453483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.453542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.453779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.453842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.454101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.454160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.454382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.454442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 
00:35:31.275 [2024-11-17 18:56:17.454670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.454758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.455011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.455088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.455372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.455432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.455630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.455706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.455920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.455980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 
00:35:31.275 [2024-11-17 18:56:17.456209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.456268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.456488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.456546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.456757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.456818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.457048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.457107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.457293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.457352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 
00:35:31.275 [2024-11-17 18:56:17.457540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.457599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.457889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.457951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.458164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.458222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.458387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.458445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.458646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.458743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 
00:35:31.275 [2024-11-17 18:56:17.458949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.459008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.459201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.459261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.459499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.459559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.459751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.459812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.460012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.460071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 
00:35:31.275 [2024-11-17 18:56:17.460299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.460359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.460579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.460638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.460870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.460930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.461103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.461164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.461400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.461463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 
00:35:31.275 [2024-11-17 18:56:17.461665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.461741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.461931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.461990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.462183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.462242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.275 qpair failed and we were unable to recover it. 00:35:31.275 [2024-11-17 18:56:17.462448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.275 [2024-11-17 18:56:17.462508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.276 qpair failed and we were unable to recover it. 00:35:31.276 [2024-11-17 18:56:17.462730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.276 [2024-11-17 18:56:17.462791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.276 qpair failed and we were unable to recover it. 
00:35:31.276 [2024-11-17 18:56:17.463027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.276 [2024-11-17 18:56:17.463088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.276 qpair failed and we were unable to recover it. 00:35:31.276 [2024-11-17 18:56:17.463285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.276 [2024-11-17 18:56:17.463345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.276 qpair failed and we were unable to recover it. 00:35:31.276 [2024-11-17 18:56:17.463552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.276 [2024-11-17 18:56:17.463614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.276 qpair failed and we were unable to recover it. 00:35:31.276 [2024-11-17 18:56:17.463828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.276 [2024-11-17 18:56:17.463891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.276 qpair failed and we were unable to recover it. 00:35:31.276 [2024-11-17 18:56:17.464132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.276 [2024-11-17 18:56:17.464191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.276 qpair failed and we were unable to recover it. 
00:35:31.276 [2024-11-17 18:56:17.464430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.276 [2024-11-17 18:56:17.464489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.276 qpair failed and we were unable to recover it.
00:35:31.279 [... last 3 messages repeated with identical content (tqpair=0xa3f690, addr=10.0.0.2, port=4420, errno = 111) through 2024-11-17 18:56:17.500982 ...]
00:35:31.279 [2024-11-17 18:56:17.501240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.501329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 00:35:31.279 [2024-11-17 18:56:17.501534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.501594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 00:35:31.279 [2024-11-17 18:56:17.501848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.501927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 00:35:31.279 [2024-11-17 18:56:17.502146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.502225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 00:35:31.279 [2024-11-17 18:56:17.502459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.502520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 
00:35:31.279 [2024-11-17 18:56:17.502810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.502890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 00:35:31.279 [2024-11-17 18:56:17.503135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.503213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 00:35:31.279 [2024-11-17 18:56:17.503447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.503507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 00:35:31.279 [2024-11-17 18:56:17.503755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.503834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 00:35:31.279 [2024-11-17 18:56:17.504105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.504165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 
00:35:31.279 [2024-11-17 18:56:17.504391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.504451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 00:35:31.279 [2024-11-17 18:56:17.504701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.504762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 00:35:31.279 [2024-11-17 18:56:17.505034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.505111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 00:35:31.279 [2024-11-17 18:56:17.505366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.505446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 00:35:31.279 [2024-11-17 18:56:17.505704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.505766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 
00:35:31.279 [2024-11-17 18:56:17.506011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.279 [2024-11-17 18:56:17.506089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.279 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.506330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.506411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.506639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.506734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.506963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.507042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.507337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.507414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 
00:35:31.280 [2024-11-17 18:56:17.507636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.507726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.508025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.508103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.508314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.508393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.508661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.508739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.508956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.509037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 
00:35:31.280 [2024-11-17 18:56:17.509253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.509331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.509603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.509663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.509906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.509986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.510261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.510340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.510518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.510578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 
00:35:31.280 [2024-11-17 18:56:17.510864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.510926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.511187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.511265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.511458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.511518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.511777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.511859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.512154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.512232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 
00:35:31.280 [2024-11-17 18:56:17.512475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.512535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.512771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.512835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.513133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.513212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.513443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.513504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.513755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.513835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 
00:35:31.280 [2024-11-17 18:56:17.514096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.514173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.514402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.514471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.514744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.514805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.515054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.515133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.515404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.515463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 
00:35:31.280 [2024-11-17 18:56:17.515701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.515780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.516031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.516110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.516358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.516439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.516724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.516785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.517037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.517117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 
00:35:31.280 [2024-11-17 18:56:17.517386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.517464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.517702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.517779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.518036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.518117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.280 [2024-11-17 18:56:17.518339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.280 [2024-11-17 18:56:17.518417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.280 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.518617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.518688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 
00:35:31.281 [2024-11-17 18:56:17.518982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.519061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.519359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.519437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.519716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.519777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.520014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.520093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.520386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.520465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 
00:35:31.281 [2024-11-17 18:56:17.520752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.520815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.521104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.521183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.521429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.521509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.521779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.521859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.522122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.522200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 
00:35:31.281 [2024-11-17 18:56:17.522467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.522528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.522727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.522788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.523024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.523100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.523352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.523440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.523730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.523811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 
00:35:31.281 [2024-11-17 18:56:17.524111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.524190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.524487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.524564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.524865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.524944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.525208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.525285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.525507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.525567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 
00:35:31.281 [2024-11-17 18:56:17.525813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.525891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.526167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.526245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.526511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.526571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.526844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.526923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.527213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.527290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 
00:35:31.281 [2024-11-17 18:56:17.527570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.527630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.527901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.527980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.528289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.528367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.528616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.528702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.528975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.529053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 
00:35:31.281 [2024-11-17 18:56:17.529308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.529389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.529666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.529746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.529994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.530072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.530368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.530446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.530714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.530777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 
00:35:31.281 [2024-11-17 18:56:17.531028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.531105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.281 qpair failed and we were unable to recover it. 00:35:31.281 [2024-11-17 18:56:17.531401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.281 [2024-11-17 18:56:17.531479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.531721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.531784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.532058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.532135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.532432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.532509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 
00:35:31.282 [2024-11-17 18:56:17.532794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.532857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.533177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.533255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.533501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.533561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.533829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.533908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.534211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.534288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 
00:35:31.282 [2024-11-17 18:56:17.534530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.534589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.534830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.534909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.535154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.535231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.535502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.535562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.535787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.535867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 
00:35:31.282 [2024-11-17 18:56:17.536184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.536262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.536534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.536594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.536886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.536966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.537262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.537340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.537628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.537715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 
00:35:31.282 [2024-11-17 18:56:17.538047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.538126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.538428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.538505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.538711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.538774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.539017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.539095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.539396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.539472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 
00:35:31.282 [2024-11-17 18:56:17.539691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.539753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.539927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.539987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.540248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.540325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.540591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.540650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.540977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.541055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 
00:35:31.282 [2024-11-17 18:56:17.541351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.541431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.541691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.541754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.542021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.542099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.542372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.542450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.542652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.542733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 
00:35:31.282 [2024-11-17 18:56:17.543011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.543091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.543381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.543458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.282 [2024-11-17 18:56:17.543738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.282 [2024-11-17 18:56:17.543803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.282 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.544117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.544194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.544418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.544496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 
00:35:31.283 [2024-11-17 18:56:17.544750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.544833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.545136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.545214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.545474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.545551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.545806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.545886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.546145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.546223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 
00:35:31.283 [2024-11-17 18:56:17.546507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.546567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.546859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.546948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.547215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.547293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.547531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.547590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.547865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.547945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 
00:35:31.283 [2024-11-17 18:56:17.548243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.548320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.548545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.548605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.548911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.548991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.549287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.549365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.549640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.549720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 
00:35:31.283 [2024-11-17 18:56:17.550021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.550098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.550340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.550416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.550626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.550703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.550934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.551015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.551284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.551361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 
00:35:31.283 [2024-11-17 18:56:17.551612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.551691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.551938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.552017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.552269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.552351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.552590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.552651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.552964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.553044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 
00:35:31.283 [2024-11-17 18:56:17.553335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.553413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.553705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.553767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.554038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.554117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.554411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.554490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.554735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.554798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 
00:35:31.283 [2024-11-17 18:56:17.555055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.555135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.555437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.283 [2024-11-17 18:56:17.555516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.283 qpair failed and we were unable to recover it. 00:35:31.283 [2024-11-17 18:56:17.555823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.284 [2024-11-17 18:56:17.555902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.284 qpair failed and we were unable to recover it. 00:35:31.284 [2024-11-17 18:56:17.556223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.284 [2024-11-17 18:56:17.556300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.284 qpair failed and we were unable to recover it. 00:35:31.284 [2024-11-17 18:56:17.556589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.284 [2024-11-17 18:56:17.556650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.284 qpair failed and we were unable to recover it. 
00:35:31.284 [2024-11-17 18:56:17.556938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.284 [2024-11-17 18:56:17.557016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.284 qpair failed and we were unable to recover it. 00:35:31.284 [2024-11-17 18:56:17.557326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.284 [2024-11-17 18:56:17.557403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.284 qpair failed and we were unable to recover it. 00:35:31.284 [2024-11-17 18:56:17.557640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.284 [2024-11-17 18:56:17.557717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.284 qpair failed and we were unable to recover it. 00:35:31.284 [2024-11-17 18:56:17.558019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.284 [2024-11-17 18:56:17.558098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.284 qpair failed and we were unable to recover it. 00:35:31.284 [2024-11-17 18:56:17.558339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.284 [2024-11-17 18:56:17.558417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.284 qpair failed and we were unable to recover it. 
00:35:31.284 [2024-11-17 18:56:17.558697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.284 [2024-11-17 18:56:17.558760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.284 qpair failed and we were unable to recover it. 00:35:31.284 [2024-11-17 18:56:17.558996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.284 [2024-11-17 18:56:17.559073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.284 qpair failed and we were unable to recover it. 00:35:31.284 [2024-11-17 18:56:17.559349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.284 [2024-11-17 18:56:17.559409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.284 qpair failed and we were unable to recover it. 00:35:31.284 [2024-11-17 18:56:17.559670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.284 [2024-11-17 18:56:17.559748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.284 qpair failed and we were unable to recover it. 00:35:31.284 [2024-11-17 18:56:17.560017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.284 [2024-11-17 18:56:17.560096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.284 qpair failed and we were unable to recover it. 
00:35:31.285 [2024-11-17 18:56:17.575112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.285 [2024-11-17 18:56:17.575210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.285 qpair failed and we were unable to recover it. 
00:35:31.287 [2024-11-17 18:56:17.597413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.287 [2024-11-17 18:56:17.597491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.287 qpair failed and we were unable to recover it. 00:35:31.287 [2024-11-17 18:56:17.597717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.287 [2024-11-17 18:56:17.597787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.287 qpair failed and we were unable to recover it. 00:35:31.287 [2024-11-17 18:56:17.598007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.287 [2024-11-17 18:56:17.598073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.287 qpair failed and we were unable to recover it. 00:35:31.287 [2024-11-17 18:56:17.598405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.287 [2024-11-17 18:56:17.598472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.287 qpair failed and we were unable to recover it. 00:35:31.287 [2024-11-17 18:56:17.598741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.287 [2024-11-17 18:56:17.598808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.287 qpair failed and we were unable to recover it. 
00:35:31.287 [2024-11-17 18:56:17.599036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.287 [2024-11-17 18:56:17.599101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.287 qpair failed and we were unable to recover it. 00:35:31.287 [2024-11-17 18:56:17.599393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.287 [2024-11-17 18:56:17.599457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.287 qpair failed and we were unable to recover it. 00:35:31.287 [2024-11-17 18:56:17.599749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.287 [2024-11-17 18:56:17.599858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.600104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.600174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.600421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.600489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 
00:35:31.288 [2024-11-17 18:56:17.600794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.600860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.601108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.601176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.601397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.601469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.601723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.601791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.602061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.602127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 
00:35:31.288 [2024-11-17 18:56:17.602348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.602414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.602612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.602696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.602898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.602963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.603173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.603272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.603536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.603603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 
00:35:31.288 [2024-11-17 18:56:17.603841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.603909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.604157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.604221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.604524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.604589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.604841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.604908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.605238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.605305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 
00:35:31.288 [2024-11-17 18:56:17.605577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.605643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.605893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.605958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.606253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.606319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.606567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.606635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.606977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.607044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 
00:35:31.288 [2024-11-17 18:56:17.607304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.607370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.607664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.607751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.607965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.608031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.608283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.608348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.608633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.608737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 
00:35:31.288 [2024-11-17 18:56:17.609040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.609106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.609353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.609421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.609643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.609736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.610020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.610086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.610402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.610468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 
00:35:31.288 [2024-11-17 18:56:17.610766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.610834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.611083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.611149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.611336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.288 [2024-11-17 18:56:17.611404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.288 qpair failed and we were unable to recover it. 00:35:31.288 [2024-11-17 18:56:17.611705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.611774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.612086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.612152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 
00:35:31.289 [2024-11-17 18:56:17.612452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.612518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.612740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.612806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.613048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.613114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.613365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.613433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.613723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.613791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 
00:35:31.289 [2024-11-17 18:56:17.614060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.614125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.614416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.614482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.614731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.614796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.615106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.615186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.615442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.615508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 
00:35:31.289 [2024-11-17 18:56:17.615711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.615777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.616034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.616099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.616303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.616371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.616583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.616652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.616945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.617011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 
00:35:31.289 [2024-11-17 18:56:17.617212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.617276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.617511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.617576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.617894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.617961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.618240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.618330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.618586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.618652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 
00:35:31.289 [2024-11-17 18:56:17.618865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.618932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.619233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.619297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.619570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.289 [2024-11-17 18:56:17.619636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.289 qpair failed and we were unable to recover it. 00:35:31.289 [2024-11-17 18:56:17.619920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.619990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 00:35:31.290 [2024-11-17 18:56:17.620247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.620313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 
00:35:31.290 [2024-11-17 18:56:17.620607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.620691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 00:35:31.290 [2024-11-17 18:56:17.620907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.620971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 00:35:31.290 [2024-11-17 18:56:17.621246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.621344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 00:35:31.290 [2024-11-17 18:56:17.621654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.621738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 00:35:31.290 [2024-11-17 18:56:17.621965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.622030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 
00:35:31.290 [2024-11-17 18:56:17.622292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.622358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 00:35:31.290 [2024-11-17 18:56:17.622606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.622701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 00:35:31.290 [2024-11-17 18:56:17.622956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.623024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 00:35:31.290 [2024-11-17 18:56:17.623234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.623301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 00:35:31.290 [2024-11-17 18:56:17.623507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.623595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 
00:35:31.290 [2024-11-17 18:56:17.623860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.623929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 00:35:31.290 [2024-11-17 18:56:17.624176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.624243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 00:35:31.290 [2024-11-17 18:56:17.624494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.624561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 00:35:31.290 [2024-11-17 18:56:17.624858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.624926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 00:35:31.290 [2024-11-17 18:56:17.625184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.290 [2024-11-17 18:56:17.625250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.290 qpair failed and we were unable to recover it. 
00:35:31.290 [2024-11-17 18:56:17.625499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.290 [2024-11-17 18:56:17.625566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.290 qpair failed and we were unable to recover it.
00:35:31.290 [2024-11-17 18:56:17.625869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.290 [2024-11-17 18:56:17.625937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.290 qpair failed and we were unable to recover it.
00:35:31.290 [2024-11-17 18:56:17.626247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.290 [2024-11-17 18:56:17.626316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.290 qpair failed and we were unable to recover it.
00:35:31.290 [2024-11-17 18:56:17.626556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.290 [2024-11-17 18:56:17.626621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.290 qpair failed and we were unable to recover it.
00:35:31.290 [2024-11-17 18:56:17.626878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.290 [2024-11-17 18:56:17.626944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.290 qpair failed and we were unable to recover it.
00:35:31.290 [2024-11-17 18:56:17.627217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.290 [2024-11-17 18:56:17.627283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.290 qpair failed and we were unable to recover it.
00:35:31.290 [2024-11-17 18:56:17.627535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.290 [2024-11-17 18:56:17.627636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.290 qpair failed and we were unable to recover it.
00:35:31.290 [2024-11-17 18:56:17.627950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.290 [2024-11-17 18:56:17.628017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.290 qpair failed and we were unable to recover it.
00:35:31.290 [2024-11-17 18:56:17.628312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.290 [2024-11-17 18:56:17.628389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.290 qpair failed and we were unable to recover it.
00:35:31.290 [2024-11-17 18:56:17.628646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.628732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.629025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.629091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.629345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.629411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.629727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.629794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.630050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.630117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.630366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.630432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.630706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.630772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.631023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.631093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.631344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.631409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.631704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.631771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.632026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.632092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.632335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.632401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.632730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.632798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.633039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.633105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.633390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.633456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.633755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.633822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.634080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.634145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.634467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.634535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.634825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.634893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.635085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.635150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.635421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.635485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.635764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.635858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.636155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.636220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.636535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.636600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.636897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.636965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.637222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.637286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.637649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.637738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.638002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.638068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.638368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.638432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.638709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.638776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.639047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.639115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.639380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.291 [2024-11-17 18:56:17.639445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.291 qpair failed and we were unable to recover it.
00:35:31.291 [2024-11-17 18:56:17.639726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.639795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.640057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.640122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.640370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.640434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.640710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.640778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.641065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.641154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.641456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.641521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.641808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.641874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.642133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.642208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.642493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.642559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.642828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.642897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.643189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.643255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.643497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.643563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.643879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.643947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.644214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.644280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.644569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.644633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.644904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.644971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.645229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.645293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.645522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.645614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.645885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.645952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.646247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.646313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.646630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.646713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.646959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.647027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.647326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.647392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.647635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.647740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.648037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.648102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.648301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.648369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.648611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.648730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.292 [2024-11-17 18:56:17.649031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.292 [2024-11-17 18:56:17.649097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.292 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.649412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.649478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.649750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.649820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.650067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.650133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.650366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.650435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.650658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.650740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.650995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.651060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.651302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.651368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.651717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.651783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.652070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.652135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.652396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.652461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.652722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.652788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.653034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.653103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.653356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.653422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.653728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.653795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.654032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.654097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.654323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.654388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.654691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.654757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.654950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.655050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.655318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.655384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.655605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.655720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.655935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.656005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.656265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.656329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.656580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.656647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.656888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.656956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.657210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.657275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.293 [2024-11-17 18:56:17.657516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.293 [2024-11-17 18:56:17.657582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.293 qpair failed and we were unable to recover it.
00:35:31.294 [2024-11-17 18:56:17.657847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.657915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.658197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.658262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.658512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.658576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.658884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.658950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.659232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.659301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 
00:35:31.294 [2024-11-17 18:56:17.659554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.659621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.659851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.659918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.660164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.660231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.660444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.660509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.660754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.660821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 
00:35:31.294 [2024-11-17 18:56:17.661113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.661178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.661479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.661544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.661807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.661876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.662166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.662230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.662515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.662580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 
00:35:31.294 [2024-11-17 18:56:17.662857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.662924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.663158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.663223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.663513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.663578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.663882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.663948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.664203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.664267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 
00:35:31.294 [2024-11-17 18:56:17.664572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.664637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.664918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.664984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.665279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.665343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.665585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.665650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.665934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.665999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 
00:35:31.294 [2024-11-17 18:56:17.666291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.666355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.666654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.666775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.294 [2024-11-17 18:56:17.667068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.294 [2024-11-17 18:56:17.667134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.294 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.667400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.667464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.667751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.667818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 
00:35:31.295 [2024-11-17 18:56:17.668072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.668137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.668424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.668490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.668709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.668777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.669070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.669146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.669457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.669523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 
00:35:31.295 [2024-11-17 18:56:17.669777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.669844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.670144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.670209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.670420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.670484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.670732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.670801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.671063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.671129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 
00:35:31.295 [2024-11-17 18:56:17.671351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.671415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.671708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.671775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.672027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.672093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.672353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.672417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.672664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.672744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 
00:35:31.295 [2024-11-17 18:56:17.673010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.673075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.673365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.673429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.673737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.673803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.674094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.674160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.674404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.674471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 
00:35:31.295 [2024-11-17 18:56:17.674778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.674845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.675104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.675169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.675416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.675482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.675747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.675814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.676002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.676068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 
00:35:31.295 [2024-11-17 18:56:17.676276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.676346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.295 [2024-11-17 18:56:17.676635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.295 [2024-11-17 18:56:17.676718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.295 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.677010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.677076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.677371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.677436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.677703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.677772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 
00:35:31.296 [2024-11-17 18:56:17.678055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.678121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.678381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.678450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.678712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.678780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.679020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.679085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.679373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.679437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 
00:35:31.296 [2024-11-17 18:56:17.679738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.679805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.680094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.680158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.680447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.680513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.680819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.680886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.681088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.681156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 
00:35:31.296 [2024-11-17 18:56:17.681396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.681462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.681774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.681839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.682106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.682171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.682461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.682538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.682787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.682853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 
00:35:31.296 [2024-11-17 18:56:17.683116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.683180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.683470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.683534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.683726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.683793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.684063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.684129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.684399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.684464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 
00:35:31.296 [2024-11-17 18:56:17.684693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.684758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.684996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.685062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.685315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.685380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.685666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.685744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.685998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.686064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 
00:35:31.296 [2024-11-17 18:56:17.686309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.686375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.686661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.686738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.687018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.687084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.687332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.296 [2024-11-17 18:56:17.687399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.296 qpair failed and we were unable to recover it. 00:35:31.296 [2024-11-17 18:56:17.687721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.297 [2024-11-17 18:56:17.687787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.297 qpair failed and we were unable to recover it. 
00:35:31.297 [2024-11-17 18:56:17.688076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.297 [2024-11-17 18:56:17.688143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.297 qpair failed and we were unable to recover it. 00:35:31.297 [2024-11-17 18:56:17.688443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.297 [2024-11-17 18:56:17.688507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.297 qpair failed and we were unable to recover it. 00:35:31.297 [2024-11-17 18:56:17.688794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.297 [2024-11-17 18:56:17.688861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.297 qpair failed and we were unable to recover it. 00:35:31.297 [2024-11-17 18:56:17.689104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.297 [2024-11-17 18:56:17.689170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.297 qpair failed and we were unable to recover it. 00:35:31.297 [2024-11-17 18:56:17.689439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.297 [2024-11-17 18:56:17.689504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.297 qpair failed and we were unable to recover it. 
00:35:31.297 [2024-11-17 18:56:17.689799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.689865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.690170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.690235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.690482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.690548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.690832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.690897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.691194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.691260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.691532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.691598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.691819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.691884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.692138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.692203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.692445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.692510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.692781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.692850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.693054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.693123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.693422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.693487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.693771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.693838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.694138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.694202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.694460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.694524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.694781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.694847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.695133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.695197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.695449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.695514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.695800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.695880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.696168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.696234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.696526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.696590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.696911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.696977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.697223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.697291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.697581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.697646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.697922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.697988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.698269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.698335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.698577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.698641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.698872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.698937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.699170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.699235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.699438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.699511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.699764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.699832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.297 [2024-11-17 18:56:17.700091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.297 [2024-11-17 18:56:17.700157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.297 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.700469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.700535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.700796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.700863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.701129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.701193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.701442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.701506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.701799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.701866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.702116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.702180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.702378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.702446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.702739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.702805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.703067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.703133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.703394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.703459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.703752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.703819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.704111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.704177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.704464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.704528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.704833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.704901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.705193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.705257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.705476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.705543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.705830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.705897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.706168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.706232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.706435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.706501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.706754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.706822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.707071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.707138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.707396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.707462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.707699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.707766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.708027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.708091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.708345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.708413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.708712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.708780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.709085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.709161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.709416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.709482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.709734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.709801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.710100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.710165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.710397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.710462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.710756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.710824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.711087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.711151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.711395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.711460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.711747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.711813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.712114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.712180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.712424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.298 [2024-11-17 18:56:17.712490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.298 qpair failed and we were unable to recover it.
00:35:31.298 [2024-11-17 18:56:17.712777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.712844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.713103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.713168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.713406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.713470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.713767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.713835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.714066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.714133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.714394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.714458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.714748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.714815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.715110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.715175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.715436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.715500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.715803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.715870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.716177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.716242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.716482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.716550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.716818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.716886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.717148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.717213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.717467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.717533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.717830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.717896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.718202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.718269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.718530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.718596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.718916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.718983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.719236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.719301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.719595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.719660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.719929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.719996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.720241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.720306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.720546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.720612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.720850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.720886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.721032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.299 [2024-11-17 18:56:17.721067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.299 qpair failed and we were unable to recover it.
00:35:31.299 [2024-11-17 18:56:17.721203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.299 [2024-11-17 18:56:17.721237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.299 qpair failed and we were unable to recover it. 00:35:31.299 [2024-11-17 18:56:17.721386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.299 [2024-11-17 18:56:17.721420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.299 qpair failed and we were unable to recover it. 00:35:31.299 [2024-11-17 18:56:17.721562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.299 [2024-11-17 18:56:17.721598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.299 qpair failed and we were unable to recover it. 00:35:31.299 [2024-11-17 18:56:17.721736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.299 [2024-11-17 18:56:17.721781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.299 qpair failed and we were unable to recover it. 00:35:31.299 [2024-11-17 18:56:17.721951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.299 [2024-11-17 18:56:17.721986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.299 qpair failed and we were unable to recover it. 
00:35:31.299 [2024-11-17 18:56:17.722126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.299 [2024-11-17 18:56:17.722160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.299 qpair failed and we were unable to recover it. 00:35:31.299 [2024-11-17 18:56:17.722365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.299 [2024-11-17 18:56:17.722430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.299 qpair failed and we were unable to recover it. 00:35:31.299 [2024-11-17 18:56:17.722735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.299 [2024-11-17 18:56:17.722771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.299 qpair failed and we were unable to recover it. 00:35:31.299 [2024-11-17 18:56:17.722888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.299 [2024-11-17 18:56:17.722923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.299 qpair failed and we were unable to recover it. 00:35:31.299 [2024-11-17 18:56:17.723054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.299 [2024-11-17 18:56:17.723120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.299 qpair failed and we were unable to recover it. 
00:35:31.299 [2024-11-17 18:56:17.723370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.299 [2024-11-17 18:56:17.723438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.299 qpair failed and we were unable to recover it. 00:35:31.299 [2024-11-17 18:56:17.723688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.299 [2024-11-17 18:56:17.723723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.299 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.723838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.723872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.724100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.724166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.724422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.724486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 
00:35:31.300 [2024-11-17 18:56:17.724770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.724816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.725038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.725129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.725501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.725595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.725840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.725890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.726105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.726195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 
00:35:31.300 [2024-11-17 18:56:17.726505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.726589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.726847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.726891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.727156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.727240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.727500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.727581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.727819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.727863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 
00:35:31.300 [2024-11-17 18:56:17.728003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.728055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.728194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.728245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.728406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.728454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.728614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.728665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 00:35:31.300 [2024-11-17 18:56:17.728862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.300 [2024-11-17 18:56:17.728907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.300 qpair failed and we were unable to recover it. 
00:35:31.300 [2024-11-17 18:56:17.729079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.300 [2024-11-17 18:56:17.729147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.300 qpair failed and we were unable to recover it.
00:35:31.300 [2024-11-17 18:56:17.729595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.300 [2024-11-17 18:56:17.729665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.300 qpair failed and we were unable to recover it.
00:35:31.302 [2024-11-17 18:56:17.747902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.747937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 00:35:31.302 [2024-11-17 18:56:17.748121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.748188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 00:35:31.302 [2024-11-17 18:56:17.748435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.748502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 00:35:31.302 [2024-11-17 18:56:17.748722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.748757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 00:35:31.302 [2024-11-17 18:56:17.748903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.748956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 
00:35:31.302 [2024-11-17 18:56:17.749203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.749237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 00:35:31.302 [2024-11-17 18:56:17.749375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.749409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 00:35:31.302 [2024-11-17 18:56:17.749570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.749637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 00:35:31.302 [2024-11-17 18:56:17.749919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.749986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 00:35:31.302 [2024-11-17 18:56:17.750291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.750358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 
00:35:31.302 [2024-11-17 18:56:17.750550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.750616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 00:35:31.302 [2024-11-17 18:56:17.750919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.750986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 00:35:31.302 [2024-11-17 18:56:17.751286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.751352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 00:35:31.302 [2024-11-17 18:56:17.751597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.751662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 00:35:31.302 [2024-11-17 18:56:17.751986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.302 [2024-11-17 18:56:17.752052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.302 qpair failed and we were unable to recover it. 
00:35:31.302 [2024-11-17 18:56:17.752254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.752320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.752607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.752672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.752910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.752977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.753277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.753353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.753610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.753695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 
00:35:31.303 [2024-11-17 18:56:17.753960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.754026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.754282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.754349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.754595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.754662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.754903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.754969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.755254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.755318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 
00:35:31.303 [2024-11-17 18:56:17.755562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.755630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.755915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.755980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.756270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.756335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.756589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.756658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.756975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.757040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 
00:35:31.303 [2024-11-17 18:56:17.757297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.757362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.757572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.757639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.757932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.757998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.758283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.758349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.758641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.758726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 
00:35:31.303 [2024-11-17 18:56:17.758977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.759044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.759309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.759375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.759671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.759776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.760065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.760130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.760409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.760474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 
00:35:31.303 [2024-11-17 18:56:17.760762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.760830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.761097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.761161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.761408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.761473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.761774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.761840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.762128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.762192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 
00:35:31.303 [2024-11-17 18:56:17.762423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.762488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.762751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.762818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.763105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.763170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.763415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.763483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.763789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.763855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 
00:35:31.303 [2024-11-17 18:56:17.764155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.764220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.764523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.764588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.764897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.303 [2024-11-17 18:56:17.764964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.303 qpair failed and we were unable to recover it. 00:35:31.303 [2024-11-17 18:56:17.765208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.765276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.765562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.765628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 
00:35:31.304 [2024-11-17 18:56:17.765931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.765996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.766223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.766290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.766488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.766556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.766830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.766897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.767205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.767271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 
00:35:31.304 [2024-11-17 18:56:17.767526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.767592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.767888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.767956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.768164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.768229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.768513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.768578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.768854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.768921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 
00:35:31.304 [2024-11-17 18:56:17.769183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.769248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.769496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.769562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.769875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.769943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.770239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.770304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.770552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.770617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 
00:35:31.304 [2024-11-17 18:56:17.770897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.770964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.771225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.771291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.771600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.771666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.771981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.772047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 00:35:31.304 [2024-11-17 18:56:17.772347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.304 [2024-11-17 18:56:17.772412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.304 qpair failed and we were unable to recover it. 
00:35:31.304 [2024-11-17 18:56:17.772721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.772787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.773077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.773142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.773390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.773456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.773757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.773823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.774079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.774144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.774400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.774465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.774726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.774794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.775082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.775147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.775388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.775453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.775748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.775815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.776105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.776181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.776417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.776482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.776775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.776842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.777094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.777162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.777448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.304 [2024-11-17 18:56:17.777513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.304 qpair failed and we were unable to recover it.
00:35:31.304 [2024-11-17 18:56:17.777715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.777782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.777988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.778056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.778309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.778374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.778627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.778704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.779001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.779065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.779353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.779418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.779708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.779774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.780071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.780135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.780385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.780452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.780767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.780835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.781139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.781204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.781456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.781521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.781766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.781833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.782019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.782083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.782370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.782434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.782747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.782813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.783099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.783164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.783407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.783474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.783672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.783755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.783994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.784059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.784309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.784375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.784669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.784752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.785014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.785081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.785269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.785337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.785589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.785656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.785980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.786046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.786328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.786393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.786642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.786727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.787030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.787094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.787314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.787381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.787607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.787708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.788007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.788072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.788266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.788333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.788621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.788707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.305 qpair failed and we were unable to recover it.
00:35:31.305 [2024-11-17 18:56:17.789025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.305 [2024-11-17 18:56:17.789059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.789182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.789222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.789348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.789383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.789531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.789565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.789686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.789737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.789876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.789910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.790045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.790077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.790253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.790289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.790429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.790464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.790752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.790786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.790901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.790935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.791060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.791093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.791267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.791332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.791624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.791702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.791865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.791898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.792061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.792126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.792415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.792480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.792696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.792750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.792915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.792949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.793167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.793200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.793333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.793383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.793501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.793535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.793723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.793757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.793901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.793934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.794098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.794130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.794369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.794435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.794701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.794752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.794898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.794931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.795073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.795106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.795248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.795280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.795578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.795612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.795865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.795898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.796112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.796178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.796484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.796549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.796797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.796830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.796933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.796992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.797247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.797312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.797592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.306 [2024-11-17 18:56:17.797657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.306 qpair failed and we were unable to recover it.
00:35:31.306 [2024-11-17 18:56:17.797824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.797857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.798063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.798128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.798382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.798447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.798741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.798780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.798921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.798954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.799197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.799230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.799395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.799461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.799688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.799757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.799902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.799935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.800101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.800166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.800415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.800491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.800710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.800756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.800901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.800934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.801150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.801214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.801586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.801620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.801784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.801818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.801975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.802045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.802328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.802363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.802534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.802620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.802818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.802850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.802963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.802996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.803203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.803268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.803492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.803556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.803755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.803789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.803926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.803959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.804094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.804128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.804306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.804339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.804490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.307 [2024-11-17 18:56:17.804522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.307 qpair failed and we were unable to recover it.
00:35:31.307 [2024-11-17 18:56:17.804650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.307 [2024-11-17 18:56:17.804701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.307 qpair failed and we were unable to recover it. 00:35:31.307 [2024-11-17 18:56:17.804917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.307 [2024-11-17 18:56:17.804993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.307 qpair failed and we were unable to recover it. 00:35:31.307 [2024-11-17 18:56:17.805264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.307 [2024-11-17 18:56:17.805299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.307 qpair failed and we were unable to recover it. 00:35:31.307 [2024-11-17 18:56:17.805440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.307 [2024-11-17 18:56:17.805475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.307 qpair failed and we were unable to recover it. 00:35:31.307 [2024-11-17 18:56:17.805781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.307 [2024-11-17 18:56:17.805848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.307 qpair failed and we were unable to recover it. 
00:35:31.307 [2024-11-17 18:56:17.806097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.307 [2024-11-17 18:56:17.806164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.307 qpair failed and we were unable to recover it. 00:35:31.307 [2024-11-17 18:56:17.806482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.307 [2024-11-17 18:56:17.806557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.307 qpair failed and we were unable to recover it. 00:35:31.307 [2024-11-17 18:56:17.806791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.307 [2024-11-17 18:56:17.806871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.307 qpair failed and we were unable to recover it. 00:35:31.307 [2024-11-17 18:56:17.807090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.307 [2024-11-17 18:56:17.807151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.307 qpair failed and we were unable to recover it. 00:35:31.307 [2024-11-17 18:56:17.807416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.307 [2024-11-17 18:56:17.807476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.307 qpair failed and we were unable to recover it. 
00:35:31.307 [2024-11-17 18:56:17.807767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.807829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.808101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.808160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.808368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.808427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.808638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.808710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.808961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.809026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 
00:35:31.308 [2024-11-17 18:56:17.809222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.809291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.809573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.809633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.809886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.809946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.810131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.810164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.810341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.810405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 
00:35:31.308 [2024-11-17 18:56:17.810686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.810747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.811012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.811090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.811312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.811347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.811453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.811489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.811751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.811814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 
00:35:31.308 [2024-11-17 18:56:17.812100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.812187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.812455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.812514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.812809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.812887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.813192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.813275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.813583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.813644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 
00:35:31.308 [2024-11-17 18:56:17.813946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.814042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.814316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.814376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.814641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.814713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.814931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.815012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.815308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.815395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 
00:35:31.308 [2024-11-17 18:56:17.815666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.815718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.815842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.815876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.816107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.816185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.816433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.816511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.816800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.816879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 
00:35:31.308 [2024-11-17 18:56:17.817142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.817220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.817519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.817603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.817876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.817955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.818213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.818290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.818523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.818585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 
00:35:31.308 [2024-11-17 18:56:17.818855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.818934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.819225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.819307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.308 [2024-11-17 18:56:17.819564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.308 [2024-11-17 18:56:17.819625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.308 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.819984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.820067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.820281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.820365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 
00:35:31.309 [2024-11-17 18:56:17.820591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.820651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.820973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.821055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.821353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.821430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.821721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.821784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.822040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.822118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 
00:35:31.309 [2024-11-17 18:56:17.822476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.822563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.822808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.822869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.823170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.823252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.823527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.823586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.823868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.823947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 
00:35:31.309 [2024-11-17 18:56:17.824248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.824331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.824606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.824666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.824923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.825036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.825276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.825355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.825601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.825661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 
00:35:31.309 [2024-11-17 18:56:17.825935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.826014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.826307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.826394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.826691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.826753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.827035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.827121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.827398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.827459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 
00:35:31.309 [2024-11-17 18:56:17.827732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.827795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.828057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.828136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.828370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.828448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.828666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.828774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.829076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.829161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 
00:35:31.309 [2024-11-17 18:56:17.829444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.829506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.829709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.829779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.830030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.830108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.830415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.830486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.830758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.830844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 
00:35:31.309 [2024-11-17 18:56:17.831066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.309 [2024-11-17 18:56:17.831147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.309 qpair failed and we were unable to recover it. 00:35:31.309 [2024-11-17 18:56:17.831426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.310 [2024-11-17 18:56:17.831487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.310 qpair failed and we were unable to recover it. 00:35:31.310 [2024-11-17 18:56:17.831744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.310 [2024-11-17 18:56:17.831826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.310 qpair failed and we were unable to recover it. 00:35:31.310 [2024-11-17 18:56:17.832086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.310 [2024-11-17 18:56:17.832185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.310 qpair failed and we were unable to recover it. 00:35:31.310 [2024-11-17 18:56:17.832401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.310 [2024-11-17 18:56:17.832463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.310 qpair failed and we were unable to recover it. 
00:35:31.310 [2024-11-17 18:56:17.832746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.310 [2024-11-17 18:56:17.832803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.310 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.832918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.832953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.833097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.833133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.833256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.833292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.833457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.833493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.833634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.833669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.833788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.833823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.833945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.833990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.834117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.834151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.834261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.834308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.834462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.834524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.834786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.834821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.834933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.834968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.835088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.835123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.835323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.835383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.835662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.835749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.835923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.835957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.836112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.836146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.836392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.836452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.836698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.836733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.836871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.836905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.837034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.837068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.837231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.837292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.837550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.837584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.837722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.837757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.837903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.837939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.838270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.838346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.838652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.838740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.838857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.838892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.839134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.839215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.839438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.839498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.839714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.839771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.839889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.839924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.840077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.840112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.840248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.840282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.840448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.602 [2024-11-17 18:56:17.840483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.602 qpair failed and we were unable to recover it.
00:35:31.602 [2024-11-17 18:56:17.840656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.840707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.840884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.840919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.841060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.841092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.841228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.841264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.841365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.841398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.841537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.841573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.841691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.841738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.841857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.841893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.842002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.842048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.842196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.842230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.842388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.842423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.842554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.842589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.842743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.842779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.842887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.842922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.843037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.843089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.843208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.843243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.843427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.843491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.843751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.843786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.843900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.843936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.844091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.844137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.844413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.844452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.844565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.844599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.844733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.844780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.844904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.844940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.845101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.845135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.845248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.845283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.845468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.845530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.845720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.845756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.845879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.845926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.846034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.846068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.846276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.846331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.846524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.846610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.846833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.846869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.847013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.847054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.847276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.847343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.847534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.847569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.847715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.847751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.847881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.603 [2024-11-17 18:56:17.847916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.603 qpair failed and we were unable to recover it.
00:35:31.603 [2024-11-17 18:56:17.848058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.848097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.848248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.848284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.848431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.848465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.848608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.848644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.848810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.848846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.848991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.849025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.849164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.849198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.849314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.849349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.849486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.849521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.849768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.849804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.849942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.849998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.850149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.850185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.850377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.850433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.850641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.850733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.850844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.850878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.850983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.851019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.851158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.851199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.851438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.851498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.851722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.851758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.851866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.851901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.852086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.604 [2024-11-17 18:56:17.852142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.604 qpair failed and we were unable to recover it.
00:35:31.604 [2024-11-17 18:56:17.852344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.852400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 00:35:31.604 [2024-11-17 18:56:17.852728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.852776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 00:35:31.604 [2024-11-17 18:56:17.852897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.852933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 00:35:31.604 [2024-11-17 18:56:17.853047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.853082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 00:35:31.604 [2024-11-17 18:56:17.853268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.853336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 
00:35:31.604 [2024-11-17 18:56:17.853563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.853598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 00:35:31.604 [2024-11-17 18:56:17.853755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.853791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 00:35:31.604 [2024-11-17 18:56:17.853925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.853959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 00:35:31.604 [2024-11-17 18:56:17.854110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.854144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 00:35:31.604 [2024-11-17 18:56:17.854299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.854334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 
00:35:31.604 [2024-11-17 18:56:17.854520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.854606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 00:35:31.604 [2024-11-17 18:56:17.854830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.854866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 00:35:31.604 [2024-11-17 18:56:17.854982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.855080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 00:35:31.604 [2024-11-17 18:56:17.855342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.855398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 00:35:31.604 [2024-11-17 18:56:17.855615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.855710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 
00:35:31.604 [2024-11-17 18:56:17.855880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.604 [2024-11-17 18:56:17.855926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.604 qpair failed and we were unable to recover it. 00:35:31.604 [2024-11-17 18:56:17.856168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.856240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.856467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.856522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.856772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.856807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.856940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.857008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 
00:35:31.605 [2024-11-17 18:56:17.857236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.857299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.857497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.857550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.857758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.857795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.857926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.857974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.858196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.858248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 
00:35:31.605 [2024-11-17 18:56:17.858454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.858506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.858766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.858802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.858949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.859008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.859231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.859284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.859473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.859525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 
00:35:31.605 [2024-11-17 18:56:17.859775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.859811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.859929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.859989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.860183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.860237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.860483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.860536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.860759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.860796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 
00:35:31.605 [2024-11-17 18:56:17.860935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.860989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.861134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.861169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.861456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.861509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.861716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.861770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.861898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.861930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 
00:35:31.605 [2024-11-17 18:56:17.862069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.862101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.862217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.862250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.862399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.862437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.862553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.862591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.862763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.862799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 
00:35:31.605 [2024-11-17 18:56:17.862909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.862944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.863118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.863168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.863354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.863407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.863617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.863692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.863834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.863868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 
00:35:31.605 [2024-11-17 18:56:17.864070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.864123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.864254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.864333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.864482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.864546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.864753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.605 [2024-11-17 18:56:17.864788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.605 qpair failed and we were unable to recover it. 00:35:31.605 [2024-11-17 18:56:17.864891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.864924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 
00:35:31.606 [2024-11-17 18:56:17.865037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.865071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 00:35:31.606 [2024-11-17 18:56:17.865264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.865331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 00:35:31.606 [2024-11-17 18:56:17.865554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.865615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 00:35:31.606 [2024-11-17 18:56:17.865775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.865808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 00:35:31.606 [2024-11-17 18:56:17.865935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.865993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 
00:35:31.606 [2024-11-17 18:56:17.866158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.866212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 00:35:31.606 [2024-11-17 18:56:17.866426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.866465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 00:35:31.606 [2024-11-17 18:56:17.866689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.866751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 00:35:31.606 [2024-11-17 18:56:17.866893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.866926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 00:35:31.606 [2024-11-17 18:56:17.867073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.867106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 
00:35:31.606 [2024-11-17 18:56:17.867262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.867334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 00:35:31.606 [2024-11-17 18:56:17.867484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.867541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 00:35:31.606 [2024-11-17 18:56:17.867734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.867769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 00:35:31.606 [2024-11-17 18:56:17.867874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.867908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 00:35:31.606 [2024-11-17 18:56:17.868032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.606 [2024-11-17 18:56:17.868071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.606 qpair failed and we were unable to recover it. 
00:35:31.606 [2024-11-17 18:56:17.868274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.606 [2024-11-17 18:56:17.868343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.606 qpair failed and we were unable to recover it.
00:35:31.606 [2024-11-17 18:56:17.868524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.606 [2024-11-17 18:56:17.868584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.606 qpair failed and we were unable to recover it.
00:35:31.606 [2024-11-17 18:56:17.868785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.606 [2024-11-17 18:56:17.868826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.606 qpair failed and we were unable to recover it.
00:35:31.606 [2024-11-17 18:56:17.868935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.606 [2024-11-17 18:56:17.868973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.606 qpair failed and we were unable to recover it.
00:35:31.606 [2024-11-17 18:56:17.869071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.606 [2024-11-17 18:56:17.869099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.606 qpair failed and we were unable to recover it.
00:35:31.607 [2024-11-17 18:56:17.873487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.873514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.873611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.873637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.873748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.873786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.873944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.873986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.874123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.874159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 
00:35:31.607 [2024-11-17 18:56:17.874316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.874342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.874505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.874538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.874718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.874761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.874846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.874872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.874997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.875030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 
00:35:31.607 [2024-11-17 18:56:17.875187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.875214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.875356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.875388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.875536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.875562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.875686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.875713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.875809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.875855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 
00:35:31.607 [2024-11-17 18:56:17.875958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.876006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.876149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.876181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.876348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.876381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.876516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.876543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.876630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.876657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 
00:35:31.607 [2024-11-17 18:56:17.876754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.876780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.876887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.876919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.877057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.877089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.877206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.607 [2024-11-17 18:56:17.877248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.607 qpair failed and we were unable to recover it. 00:35:31.607 [2024-11-17 18:56:17.877379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.877410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 
00:35:31.608 [2024-11-17 18:56:17.877510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.877545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.877642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.877668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.877763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.877790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.877912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.877952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.878062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.878104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 
00:35:31.608 [2024-11-17 18:56:17.878213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.878240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.878366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.878397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.878490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.878517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.878606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.878632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.878770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.878797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 
00:35:31.608 [2024-11-17 18:56:17.878912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.878961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.879080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.879106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.879216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.879249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.879415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.879441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.879580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.879611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 
00:35:31.608 [2024-11-17 18:56:17.879727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.879753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.879843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.879869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.879951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.879976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.880064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.880090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.880201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.880240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 
00:35:31.608 [2024-11-17 18:56:17.880394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.880452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.880541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.880584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.880700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.880727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.880868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.880893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.881041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.881076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 
00:35:31.608 [2024-11-17 18:56:17.881219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.881256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.881372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.881409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.881559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.881586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.881695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.881734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.881822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.881848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 
00:35:31.608 [2024-11-17 18:56:17.881959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.882013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.882160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.882227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.882366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.882419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.882538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.882565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.882704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.882731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 
00:35:31.608 [2024-11-17 18:56:17.882850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.882877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.608 [2024-11-17 18:56:17.882967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.608 [2024-11-17 18:56:17.882993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.608 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.883085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.883111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.883210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.883236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.883327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.883355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 
00:35:31.609 [2024-11-17 18:56:17.883468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.883494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.883612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.883637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.883743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.883772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.883871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.883898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.884018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.884076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 
00:35:31.609 [2024-11-17 18:56:17.884242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.884295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.884490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.884528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.884697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.884744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.884871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.884899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.884983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.885010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 
00:35:31.609 [2024-11-17 18:56:17.885120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.885153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.885337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.885380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.885591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.885628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.885748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.885775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 00:35:31.609 [2024-11-17 18:56:17.885856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.609 [2024-11-17 18:56:17.885883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.609 qpair failed and we were unable to recover it. 
00:35:31.609 [2024-11-17 18:56:17.886010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.609 [2024-11-17 18:56:17.886037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.609 qpair failed and we were unable to recover it.
[... the same two-line sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error — repeats roughly 100 more times between 18:56:17.886 and 18:56:17.903, every attempt targeting addr=10.0.0.2, port=4420 and ending with "qpair failed and we were unable to recover it."; the tqpair handles cycle through 0xa3f690, 0x7f4db8000b90, 0x7f4db4000b90, and 0x7f4dc0000b90 ...]
00:35:31.612 [2024-11-17 18:56:17.903822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.612 [2024-11-17 18:56:17.903849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.612 qpair failed and we were unable to recover it.
00:35:31.612 [2024-11-17 18:56:17.903946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-17 18:56:17.903972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.612 qpair failed and we were unable to recover it. 00:35:31.612 [2024-11-17 18:56:17.904080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-17 18:56:17.904106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.612 qpair failed and we were unable to recover it. 00:35:31.612 [2024-11-17 18:56:17.904195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-17 18:56:17.904222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.612 qpair failed and we were unable to recover it. 00:35:31.612 [2024-11-17 18:56:17.904330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-17 18:56:17.904362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.612 qpair failed and we were unable to recover it. 00:35:31.612 [2024-11-17 18:56:17.904474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-17 18:56:17.904516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.612 qpair failed and we were unable to recover it. 
00:35:31.612 [2024-11-17 18:56:17.904605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-17 18:56:17.904632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.612 qpair failed and we were unable to recover it. 00:35:31.612 [2024-11-17 18:56:17.904760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-17 18:56:17.904787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.612 qpair failed and we were unable to recover it. 00:35:31.612 [2024-11-17 18:56:17.904876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-17 18:56:17.904903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.612 qpair failed and we were unable to recover it. 00:35:31.612 [2024-11-17 18:56:17.904998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-17 18:56:17.905040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.612 qpair failed and we were unable to recover it. 00:35:31.612 [2024-11-17 18:56:17.905169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-17 18:56:17.905200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.612 qpair failed and we were unable to recover it. 
00:35:31.612 [2024-11-17 18:56:17.905299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-17 18:56:17.905344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.612 qpair failed and we were unable to recover it. 00:35:31.612 [2024-11-17 18:56:17.905496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-17 18:56:17.905522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.612 qpair failed and we were unable to recover it. 00:35:31.612 [2024-11-17 18:56:17.905636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.612 [2024-11-17 18:56:17.905662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.612 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.905757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.905783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.905874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.905900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 
00:35:31.613 [2024-11-17 18:56:17.905984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.906010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.906107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.906152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.906241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.906271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.906429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.906477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.906586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.906613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 
00:35:31.613 [2024-11-17 18:56:17.906706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.906733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.906823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.906850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.906972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.907014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.907123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.907153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.907256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.907289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 
00:35:31.613 [2024-11-17 18:56:17.907426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.907466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.907601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.907627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.907730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.907757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.907846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.907872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.907956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.907982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 
00:35:31.613 [2024-11-17 18:56:17.908120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.908150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.908260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.908303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.908401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.908432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.908552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.908578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.908690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.908717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 
00:35:31.613 [2024-11-17 18:56:17.908808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.908835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.908936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.908963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.909062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.909093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.909198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.909228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.909357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.909390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 
00:35:31.613 [2024-11-17 18:56:17.909555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.909594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.909727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.909756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.909843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.909869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.909971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.910002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.910082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.910108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 
00:35:31.613 [2024-11-17 18:56:17.910220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.910246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.910330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.910358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.910452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.910478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.910559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.910600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 00:35:31.613 [2024-11-17 18:56:17.910721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.910749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.613 qpair failed and we were unable to recover it. 
00:35:31.613 [2024-11-17 18:56:17.910843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.613 [2024-11-17 18:56:17.910868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.910956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.910983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.911058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.911084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.911177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.911203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.911323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.911350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 
00:35:31.614 [2024-11-17 18:56:17.911433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.911460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.911551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.911582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.911719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.911748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.911833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.911861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.911986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.912017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 
00:35:31.614 [2024-11-17 18:56:17.912158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.912184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.912313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.912359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.912448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.912480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.912582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.912608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.912726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.912754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 
00:35:31.614 [2024-11-17 18:56:17.912871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.912897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.912989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.913016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.913105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.913131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.913213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.913240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.913373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.913402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 
00:35:31.614 [2024-11-17 18:56:17.913495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.913522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.913604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.913630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.913733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.913759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.913865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.913891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.913986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.914013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 
00:35:31.614 [2024-11-17 18:56:17.914131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.914178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.914324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.914368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.914455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.914482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.914564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.914590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 00:35:31.614 [2024-11-17 18:56:17.914686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.614 [2024-11-17 18:56:17.914714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.614 qpair failed and we were unable to recover it. 
00:35:31.614 [2024-11-17 18:56:17.915305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.614 [2024-11-17 18:56:17.915333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.614 qpair failed and we were unable to recover it.
00:35:31.614 [2024-11-17 18:56:17.915585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.614 [2024-11-17 18:56:17.915623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.614 qpair failed and we were unable to recover it.
00:35:31.617 [2024-11-17 18:56:17.929594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.617 [2024-11-17 18:56:17.929638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.617 qpair failed and we were unable to recover it.
00:35:31.617 [2024-11-17 18:56:17.930706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.617 [2024-11-17 18:56:17.930733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.617 qpair failed and we were unable to recover it. 00:35:31.617 [2024-11-17 18:56:17.930869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.617 [2024-11-17 18:56:17.930915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.617 qpair failed and we were unable to recover it. 00:35:31.617 [2024-11-17 18:56:17.931047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.617 [2024-11-17 18:56:17.931080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.617 qpair failed and we were unable to recover it. 00:35:31.617 [2024-11-17 18:56:17.931188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.617 [2024-11-17 18:56:17.931225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.617 qpair failed and we were unable to recover it. 00:35:31.617 [2024-11-17 18:56:17.931430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.617 [2024-11-17 18:56:17.931483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.617 qpair failed and we were unable to recover it. 
00:35:31.617 [2024-11-17 18:56:17.931623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.617 [2024-11-17 18:56:17.931649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.617 qpair failed and we were unable to recover it. 00:35:31.617 [2024-11-17 18:56:17.931795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.617 [2024-11-17 18:56:17.931826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.617 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.931949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.931975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.932107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.932149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.932264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.932309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 
00:35:31.618 [2024-11-17 18:56:17.932465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.932499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.932659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.932694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.932791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.932817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.932901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.932926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.933053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.933080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 
00:35:31.618 [2024-11-17 18:56:17.933174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.933200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.933349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.933384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.933546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.933582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.933755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.933783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.933871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.933897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 
00:35:31.618 [2024-11-17 18:56:17.934008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.934048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.934207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.934261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.934360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.934387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.934504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.934530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.934628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.934667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 
00:35:31.618 [2024-11-17 18:56:17.934800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.934828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.934914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.934940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.935113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.935149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.935302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.935352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.935492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.935543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 
00:35:31.618 [2024-11-17 18:56:17.935658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.935699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.935807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.935833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.935916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.935943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.936067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.936104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.936238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.936268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 
00:35:31.618 [2024-11-17 18:56:17.936420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.936450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.936572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.936598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.936713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.936740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.936837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.936863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.936950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.936976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 
00:35:31.618 [2024-11-17 18:56:17.937056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.937082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.937208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.937240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.937380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.937411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.937532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.937563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 00:35:31.618 [2024-11-17 18:56:17.937702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.618 [2024-11-17 18:56:17.937745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.618 qpair failed and we were unable to recover it. 
00:35:31.618 [2024-11-17 18:56:17.937832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.937859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.937961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.938003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.938105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.938133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.938281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.938311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.938404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.938434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 
00:35:31.619 [2024-11-17 18:56:17.938538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.938568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.938691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.938731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.938829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.938858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.939071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.939124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.939317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.939353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 
00:35:31.619 [2024-11-17 18:56:17.939454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.939480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.939592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.939619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.939719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.939748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.939842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.939868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.939989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.940016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 
00:35:31.619 [2024-11-17 18:56:17.940151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.940189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.940292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.940318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.940406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.940433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.940514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.940542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.940695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.940722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 
00:35:31.619 [2024-11-17 18:56:17.940821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.940848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.940934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.940961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.941122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.941148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.941261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.941299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.941421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.941447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 
00:35:31.619 [2024-11-17 18:56:17.941536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.941562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.941687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.941715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.941825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.941852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.942051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.942083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.942210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.942237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 
00:35:31.619 [2024-11-17 18:56:17.942381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.942409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.942545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.942571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.942670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.942716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.942795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.942822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 00:35:31.619 [2024-11-17 18:56:17.942899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.619 [2024-11-17 18:56:17.942925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.619 qpair failed and we were unable to recover it. 
00:35:31.620 [2024-11-17 18:56:17.945688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.620 [2024-11-17 18:56:17.945727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.620 qpair failed and we were unable to recover it.
00:35:31.620 [2024-11-17 18:56:17.945821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.620 [2024-11-17 18:56:17.945850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.620 qpair failed and we were unable to recover it.
00:35:31.620 [2024-11-17 18:56:17.945939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.620 [2024-11-17 18:56:17.945967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.620 qpair failed and we were unable to recover it.
00:35:31.620 [2024-11-17 18:56:17.946123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.620 [2024-11-17 18:56:17.946169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.620 qpair failed and we were unable to recover it.
00:35:31.620 [2024-11-17 18:56:17.946369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.620 [2024-11-17 18:56:17.946421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.620 qpair failed and we were unable to recover it.
00:35:31.621 [2024-11-17 18:56:17.952121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.621 [2024-11-17 18:56:17.952161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.621 qpair failed and we were unable to recover it.
00:35:31.621 [2024-11-17 18:56:17.952285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.621 [2024-11-17 18:56:17.952313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.621 qpair failed and we were unable to recover it.
00:35:31.621 [2024-11-17 18:56:17.952430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.621 [2024-11-17 18:56:17.952461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.621 qpair failed and we were unable to recover it.
00:35:31.621 [2024-11-17 18:56:17.952603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.621 [2024-11-17 18:56:17.952633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.621 qpair failed and we were unable to recover it.
00:35:31.621 [2024-11-17 18:56:17.952755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.621 [2024-11-17 18:56:17.952786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.621 qpair failed and we were unable to recover it.
00:35:31.622 [2024-11-17 18:56:17.960283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.622 [2024-11-17 18:56:17.960314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.622 qpair failed and we were unable to recover it. 00:35:31.622 [2024-11-17 18:56:17.960440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.622 [2024-11-17 18:56:17.960470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.622 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.960574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.960600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.960717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.960745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.960867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.960893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 
00:35:31.623 [2024-11-17 18:56:17.961007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.961033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.961112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.961139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.961275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.961305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.961498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.961528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.961633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.961663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 
00:35:31.623 [2024-11-17 18:56:17.961780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.961806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.961899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.961925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.962010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.962036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.962166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.962196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.962390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.962423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 
00:35:31.623 [2024-11-17 18:56:17.962540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.962566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.962684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.962711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.962832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.962863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.962956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.962982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.963160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.963190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 
00:35:31.623 [2024-11-17 18:56:17.963385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.963415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.963542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.963573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.963724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.963753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.963865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.963891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.963977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.964002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 
00:35:31.623 [2024-11-17 18:56:17.964124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.964154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.964269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.964315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.964466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.964505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.964636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.964668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.964789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.964816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 
00:35:31.623 [2024-11-17 18:56:17.964900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.964927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.965058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.965085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.965266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.965316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.965442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.965472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.965569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.965602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 
00:35:31.623 [2024-11-17 18:56:17.965744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.965772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.965864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.965892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.966036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.966075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.966291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.966330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 00:35:31.623 [2024-11-17 18:56:17.966530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.623 [2024-11-17 18:56:17.966595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.623 qpair failed and we were unable to recover it. 
00:35:31.624 [2024-11-17 18:56:17.966780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.966808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.966892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.966918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.967011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.967038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.967178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.967204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.967351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.967412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 
00:35:31.624 [2024-11-17 18:56:17.967551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.967581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.967705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.967750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.967837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.967866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.967978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.968006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.968120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.968146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 
00:35:31.624 [2024-11-17 18:56:17.968278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.968317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.968468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.968507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.968630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.968661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.968814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.968840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.968928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.968955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 
00:35:31.624 [2024-11-17 18:56:17.969060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.969090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.969217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.969248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.969348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.969378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.969513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.969545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.969696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.969736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 
00:35:31.624 [2024-11-17 18:56:17.969862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.969892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.970026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.970059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.970249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.970295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.970426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.970456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.970565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.970592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 
00:35:31.624 [2024-11-17 18:56:17.970710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.970739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.970856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.970882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.971007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.971046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.971187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.971240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.971398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.971439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 
00:35:31.624 [2024-11-17 18:56:17.971561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.971590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.971731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.971771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.971869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.971897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.972033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.972065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.972211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.972242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 
00:35:31.624 [2024-11-17 18:56:17.972338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.972369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.972463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.624 [2024-11-17 18:56:17.972509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.624 qpair failed and we were unable to recover it. 00:35:31.624 [2024-11-17 18:56:17.972646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.625 [2024-11-17 18:56:17.972680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.625 qpair failed and we were unable to recover it. 00:35:31.625 [2024-11-17 18:56:17.972776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.625 [2024-11-17 18:56:17.972802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.625 qpair failed and we were unable to recover it. 00:35:31.625 [2024-11-17 18:56:17.972910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.625 [2024-11-17 18:56:17.972954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.625 qpair failed and we were unable to recover it. 
00:35:31.625 [2024-11-17 18:56:17.973047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.625 [2024-11-17 18:56:17.973077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.625 qpair failed and we were unable to recover it. 00:35:31.625 [2024-11-17 18:56:17.973207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.625 [2024-11-17 18:56:17.973238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.625 qpair failed and we were unable to recover it. 00:35:31.625 [2024-11-17 18:56:17.973373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.625 [2024-11-17 18:56:17.973403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.625 qpair failed and we were unable to recover it. 00:35:31.625 [2024-11-17 18:56:17.973519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.625 [2024-11-17 18:56:17.973558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.625 qpair failed and we were unable to recover it. 00:35:31.625 [2024-11-17 18:56:17.973690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.625 [2024-11-17 18:56:17.973719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:31.625 qpair failed and we were unable to recover it. 
00:35:31.625 [2024-11-17 18:56:17.973846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.973875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.973994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.974037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.974164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.974194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.974320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.974365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.974517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.974556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.974718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.974745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.974850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.974879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.975033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.975072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.975290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.975333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.975532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.975573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.975706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.975739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.975848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.975874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.976004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.976040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.976212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.976265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.976444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.976500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.976666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.976716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.976813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.976841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.976954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.977000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.977106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.977159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.977246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.977272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.977411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.977437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.977556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.977582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.977730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.977758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.977844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.977871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.977964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.977990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.625 [2024-11-17 18:56:17.978101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.625 [2024-11-17 18:56:17.978127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.625 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.978236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.978271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.978393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.978419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.978511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.978538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.978645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.978672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.978798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.978824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.978931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.978971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.979120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.979148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.979249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.979287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.979411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.979438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.979590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.979617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.979750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.979778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.979912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.979943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.980142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.980181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.980318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.980372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.980542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.980582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.980756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.980798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.980905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.980931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.981063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.981093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.981253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.981306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.981504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.981556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.981716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.981745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.981862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.981890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.982056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.982100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.982267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.982313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.982428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.982479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.982622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.982649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.982747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.982793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.982955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.983019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.983158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.983209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.983334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.983376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.983506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.983545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.983694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.983737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.983845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.983875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.984001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.984041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.984247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.984286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.984409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.984465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.984597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.984623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.626 qpair failed and we were unable to recover it.
00:35:31.626 [2024-11-17 18:56:17.984743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.626 [2024-11-17 18:56:17.984770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.984925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.984955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.985174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.985223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.985414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.985463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.985659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.985694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.985781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.985808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.985987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.986031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.986178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.986227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.986424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.986474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.986596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.986622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.986757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.986791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.986887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.986918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.987039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.987069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.987200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.987248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.987396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.987453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.987641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.987706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.987822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.987849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.987972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.988003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.988178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.988229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.988363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.988418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.988653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.988739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.988845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.988871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.988997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.989023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.989134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.989159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.989359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.989398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.989525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.989587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.989740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.989767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.989882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.989908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.990034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.990077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.990241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.990282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.990499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.990538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.990697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.990731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.990821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.990847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.990932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.990960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.991073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.991104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.991222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.991256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.991450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.991500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.991617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.991643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.627 [2024-11-17 18:56:17.991759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.627 [2024-11-17 18:56:17.991786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.627 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.991868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.991894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.991989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.992019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.992173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.992214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.992430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.992471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.992598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.992624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.992737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.992770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.992906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.992965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.993120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.993173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.993292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.993342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.993508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.993539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.993671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.993708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.993799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.993826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.993945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.993973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.994112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.994139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.994253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.994297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.994441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.994483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.994633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.994659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.994752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.994778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.994869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.994896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.994993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.995021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.995114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.995141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.995252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.995283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.995412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.995441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.995566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.995597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.995749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.995776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.995890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.995917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.996013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.996056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.996188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.996218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.996373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.996403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.996559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.996589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.996718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.996761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.996878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.996904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.997044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.997074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.997157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.997182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.997282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.997313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.997424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.997453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.997537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.997567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.628 qpair failed and we were unable to recover it.
00:35:31.628 [2024-11-17 18:56:17.997705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.628 [2024-11-17 18:56:17.997732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:17.997826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:17.997852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:17.997993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:17.998027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:17.998158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:17.998211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:17.998342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:17.998373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:17.998498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:17.998527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:17.998634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:17.998660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:17.998782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:17.998809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:17.998920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:17.998967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:17.999115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:17.999164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:17.999357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:17.999400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:17.999562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:17.999594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:17.999760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:17.999787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:17.999902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:17.999928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.000090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.000142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.000254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.000281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.000462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.000492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.000644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.000681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.000810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.000845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.001009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.001057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.001224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.001254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.001375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.001405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.001530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.001561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.001663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.001708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.001840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.001870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.001968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.001998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.002123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.002153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.002284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.002313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.002439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.002470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.002647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.002684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.002778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.002809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.002951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.003003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.003160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.003210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.629 [2024-11-17 18:56:18.003352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.629 [2024-11-17 18:56:18.003386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.629 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.003528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.003558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.003699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.003730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.003881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.003933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.004065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.004116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.004242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.004272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.004403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.004432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.004561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.004599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.004749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.004783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.004915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.004946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.005078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.005108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.005231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.005262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.005363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.005394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.005523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.005553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.005689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.005720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.005841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.005871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.006030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.006061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.006214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.630 [2024-11-17 18:56:18.006255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.630 qpair failed and we were unable to recover it.
00:35:31.630 [2024-11-17 18:56:18.006419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.006470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.006572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.006607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.006770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.006804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.006958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.007009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.007108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.007139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 
00:35:31.630 [2024-11-17 18:56:18.007270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.007322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.007450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.007480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.007604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.007634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.007785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.007834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.008022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.008065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 
00:35:31.630 [2024-11-17 18:56:18.008199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.008242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.008377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.008418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.008588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.008629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.008803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.008834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.008945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.009002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 
00:35:31.630 [2024-11-17 18:56:18.009097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.009128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.009278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.009329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.009487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.009517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.009707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.009755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.630 [2024-11-17 18:56:18.009882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.009932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 
00:35:31.630 [2024-11-17 18:56:18.010117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.630 [2024-11-17 18:56:18.010171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.630 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.010317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.010370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.010503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.010533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.010651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.010688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.010848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.010902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 
00:35:31.631 [2024-11-17 18:56:18.011034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.011087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.011235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.011288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.011420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.011450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.011578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.011610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.011741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.011774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 
00:35:31.631 [2024-11-17 18:56:18.011900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.011930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.012024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.012054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.012177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.012209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.012340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.012370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.012499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.012529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 
00:35:31.631 [2024-11-17 18:56:18.012647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.012688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.012821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.012852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.012971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.013019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.013155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.013207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.013383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.013425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 
00:35:31.631 [2024-11-17 18:56:18.013560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.013601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.013796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.013828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.013957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.013987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.014156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.014198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.014412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.014455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 
00:35:31.631 [2024-11-17 18:56:18.014642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.014683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.014795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.014825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.014949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.014980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.015109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.015160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.015305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.015346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 
00:35:31.631 [2024-11-17 18:56:18.015488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.015544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.015731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.015763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.015869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.015921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.016106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.016148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.016344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.016386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 
00:35:31.631 [2024-11-17 18:56:18.016533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.016578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.016707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.016739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.016869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.016900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.016999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.631 [2024-11-17 18:56:18.017029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.631 qpair failed and we were unable to recover it. 00:35:31.631 [2024-11-17 18:56:18.017151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.017193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 
00:35:31.632 [2024-11-17 18:56:18.017324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.017368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.017565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.017615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.017807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.017852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.018051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.018106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.018199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.018229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 
00:35:31.632 [2024-11-17 18:56:18.018332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.018362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.018500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.018532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.018703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.018734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.018925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.018978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.019172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.019221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 
00:35:31.632 [2024-11-17 18:56:18.019371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.019422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.019579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.019612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.019752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.019784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.019940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.019991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.020169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.020210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 
00:35:31.632 [2024-11-17 18:56:18.020348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.020390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.020564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.020606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.020751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.020790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.020894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.020925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.021095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.021136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 
00:35:31.632 [2024-11-17 18:56:18.021350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.021391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.021535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.021565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.021682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.021715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.021809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.021843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.022003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.022057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 
00:35:31.632 [2024-11-17 18:56:18.022212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.022253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.022382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.022433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.022616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.022659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.022860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.022890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 00:35:31.632 [2024-11-17 18:56:18.023012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.632 [2024-11-17 18:56:18.023043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.632 qpair failed and we were unable to recover it. 
00:35:31.634 [2024-11-17 18:56:18.034983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.634 [2024-11-17 18:56:18.035047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.634 qpair failed and we were unable to recover it.
00:35:31.635 [2024-11-17 18:56:18.050694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.635 [2024-11-17 18:56:18.050744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.635 qpair failed and we were unable to recover it. 00:35:31.635 [2024-11-17 18:56:18.050929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.635 [2024-11-17 18:56:18.050977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.635 qpair failed and we were unable to recover it. 00:35:31.635 [2024-11-17 18:56:18.051141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.051190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.051383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.051432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.051591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.051639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 
00:35:31.636 [2024-11-17 18:56:18.051842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.051897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.052081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.052130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.052350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.052395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.052610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.052656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.052851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.052900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 
00:35:31.636 [2024-11-17 18:56:18.053089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.053137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.053291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.053340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.053480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.053528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.053711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.053761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.053958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.054021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 
00:35:31.636 [2024-11-17 18:56:18.054163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.054205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.054376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.054441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.054623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.054672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.054905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.054953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.055181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.055229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 
00:35:31.636 [2024-11-17 18:56:18.055397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.055475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.055733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.055783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.056015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.056064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.056261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.056311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.056506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.056556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 
00:35:31.636 [2024-11-17 18:56:18.056779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.056826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.057049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.057094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.057267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.057325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.057556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.057604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.057798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.057849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 
00:35:31.636 [2024-11-17 18:56:18.058061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.058107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.058290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.058337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.058519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.058561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.058794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.058841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.059088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.059136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 
00:35:31.636 [2024-11-17 18:56:18.059330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.059378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.059578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.059627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.059812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.059864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.060066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.060115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 00:35:31.636 [2024-11-17 18:56:18.060319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.636 [2024-11-17 18:56:18.060368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.636 qpair failed and we were unable to recover it. 
00:35:31.637 [2024-11-17 18:56:18.060615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.060726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.060934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.060983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.061165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.061213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.061359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.061421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.061609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.061655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 
00:35:31.637 [2024-11-17 18:56:18.061865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.061915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.062150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.062194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.062369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.062413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.062663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.062720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.062909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.062955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 
00:35:31.637 [2024-11-17 18:56:18.063143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.063189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.063336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.063381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.063531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.063577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.063766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.063815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.064023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.064075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 
00:35:31.637 [2024-11-17 18:56:18.064303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.064354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.064594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.064647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.064885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.064937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.065134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.065185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.065365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.065411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 
00:35:31.637 [2024-11-17 18:56:18.065626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.065685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.065848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.065896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.066115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.066160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.066358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.066400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.066579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.066640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 
00:35:31.637 [2024-11-17 18:56:18.066873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.066918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.067112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.067156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.067332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.067398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.067533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.067578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.067791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.067839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 
00:35:31.637 [2024-11-17 18:56:18.067987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.068032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.068216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.068262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.068412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.068457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.068670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.068725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.068922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.068974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 
00:35:31.637 [2024-11-17 18:56:18.069180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.069231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.069473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.069525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.637 qpair failed and we were unable to recover it. 00:35:31.637 [2024-11-17 18:56:18.069770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.637 [2024-11-17 18:56:18.069822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.638 qpair failed and we were unable to recover it. 00:35:31.638 [2024-11-17 18:56:18.069986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.638 [2024-11-17 18:56:18.070038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.638 qpair failed and we were unable to recover it. 00:35:31.638 [2024-11-17 18:56:18.070202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.638 [2024-11-17 18:56:18.070255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.638 qpair failed and we were unable to recover it. 
00:35:31.641 [2024-11-17 18:56:18.100119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.100178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.100458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.100519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.100765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.100827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.101057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.101117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.101303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.101363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 
00:35:31.641 [2024-11-17 18:56:18.101624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.101699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.101954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.102009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.102256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.102312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.102532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.102587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.102796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.102854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 
00:35:31.641 [2024-11-17 18:56:18.103116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.103174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.103361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.103418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.103622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.103715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.103883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.103941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.104115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.104171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 
00:35:31.641 [2024-11-17 18:56:18.104421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.104518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.104840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.104902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.105171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.105232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.105442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.105503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.105779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.105840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 
00:35:31.641 [2024-11-17 18:56:18.106101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.106162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.106336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.106397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.106629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.106724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.106965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.107025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.107264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.107326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 
00:35:31.641 [2024-11-17 18:56:18.107544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.107607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.107869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.107930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.108148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.108209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.108409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.108469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.108706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.108767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 
00:35:31.641 [2024-11-17 18:56:18.108963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.641 [2024-11-17 18:56:18.109024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.641 qpair failed and we were unable to recover it. 00:35:31.641 [2024-11-17 18:56:18.109295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.109357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.109555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.109615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.109880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.109981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.110251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.110310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 
00:35:31.642 [2024-11-17 18:56:18.110565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.110654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.110922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.110990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.111266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.111341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.111604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.111669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.111972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.112037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 
00:35:31.642 [2024-11-17 18:56:18.112316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.112376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.112653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.112752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.113028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.113090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.113295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.113355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.113556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.113615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 
00:35:31.642 [2024-11-17 18:56:18.113860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.113924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.114155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.114238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.114474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.114536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.114791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.114853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.115050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.115109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 
00:35:31.642 [2024-11-17 18:56:18.115327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.115410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.115696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.115759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.115952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.116011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.116234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.116294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.116519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.116579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 
00:35:31.642 [2024-11-17 18:56:18.116824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.116907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.117150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.117210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.117408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.117468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.117632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.117707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.117911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.117973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 
00:35:31.642 [2024-11-17 18:56:18.118191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.118255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.118471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.118535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.118808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.118869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.119141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.119200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.119404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.119465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 
00:35:31.642 [2024-11-17 18:56:18.119699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.119807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.642 [2024-11-17 18:56:18.120032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.642 [2024-11-17 18:56:18.120097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.642 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.120338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.120403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.120655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.120755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.120977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.121045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 
00:35:31.643 [2024-11-17 18:56:18.121323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.121410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.121690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.121757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.122005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.122065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.122274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.122334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.122539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.122599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 
00:35:31.643 [2024-11-17 18:56:18.122888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.122952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.123134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.123196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.123423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.123526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.123836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.123903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.124175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.124240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 
00:35:31.643 [2024-11-17 18:56:18.124438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.124503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.124763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.124829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.125120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.125188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.125453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.125522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 00:35:31.643 [2024-11-17 18:56:18.125803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.643 [2024-11-17 18:56:18.125870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.643 qpair failed and we were unable to recover it. 
00:35:31.643 [2024-11-17 18:56:18.126164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.643 [2024-11-17 18:56:18.126229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.643 qpair failed and we were unable to recover it.
00:35:31.643 [2024-11-17 18:56:18.126479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.643 [2024-11-17 18:56:18.126583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.643 qpair failed and we were unable to recover it.
00:35:31.643 [2024-11-17 18:56:18.126861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.643 [2024-11-17 18:56:18.126951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.643 qpair failed and we were unable to recover it.
00:35:31.643 [2024-11-17 18:56:18.127180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.643 [2024-11-17 18:56:18.127248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.643 qpair failed and we were unable to recover it.
00:35:31.643 [2024-11-17 18:56:18.127513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.643 [2024-11-17 18:56:18.127579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.643 qpair failed and we were unable to recover it.
00:35:31.643 [2024-11-17 18:56:18.127816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.643 [2024-11-17 18:56:18.127881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.643 qpair failed and we were unable to recover it.
00:35:31.643 [2024-11-17 18:56:18.128097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.643 [2024-11-17 18:56:18.128164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.643 qpair failed and we were unable to recover it.
00:35:31.643 [2024-11-17 18:56:18.128499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.643 [2024-11-17 18:56:18.128567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.643 qpair failed and we were unable to recover it.
00:35:31.643 [2024-11-17 18:56:18.128869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.643 [2024-11-17 18:56:18.128936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.643 qpair failed and we were unable to recover it.
00:35:31.643 [2024-11-17 18:56:18.129226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.643 [2024-11-17 18:56:18.129290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.643 qpair failed and we were unable to recover it.
00:35:31.643 [2024-11-17 18:56:18.129531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.643 [2024-11-17 18:56:18.129596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.643 qpair failed and we were unable to recover it.
00:35:31.643 [2024-11-17 18:56:18.129874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.643 [2024-11-17 18:56:18.129941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.643 qpair failed and we were unable to recover it.
00:35:31.643 [2024-11-17 18:56:18.130219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.643 [2024-11-17 18:56:18.130286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.643 qpair failed and we were unable to recover it.
00:35:31.643 [2024-11-17 18:56:18.130545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.643 [2024-11-17 18:56:18.130611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.643 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.130908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.130977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.131207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.131273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.131531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.131599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.131888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.131955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.132225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.132291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.132544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.132609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.132846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.132912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.133184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.133252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.133591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.133658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.133891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.133957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.134210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.134276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.134517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.134582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.134891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.134959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.135198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.135263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.135511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.135577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.135809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.135876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.136118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.136184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.136458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.136525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.136775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.136855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.137126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.137190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.137451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.137516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.137810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.137877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.138230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.138296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.138538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.138603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.138924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.139002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.139206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.139271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.139478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.139542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.139842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.139908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.140175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.140241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.140470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.140535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.140778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.140844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.141144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.141215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.141476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.141541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.141772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.141840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.142137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.142202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.142471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.142541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.142766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.142857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.143071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.143136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.143395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.143460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.143740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.644 [2024-11-17 18:56:18.143811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.644 qpair failed and we were unable to recover it.
00:35:31.644 [2024-11-17 18:56:18.144065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.645 [2024-11-17 18:56:18.144130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.645 qpair failed and we were unable to recover it.
00:35:31.645 [2024-11-17 18:56:18.144408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.645 [2024-11-17 18:56:18.144474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.645 qpair failed and we were unable to recover it.
00:35:31.645 [2024-11-17 18:56:18.144767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.645 [2024-11-17 18:56:18.144834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.645 qpair failed and we were unable to recover it.
00:35:31.645 [2024-11-17 18:56:18.145081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.645 [2024-11-17 18:56:18.145146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.645 qpair failed and we were unable to recover it.
00:35:31.645 [2024-11-17 18:56:18.145366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.645 [2024-11-17 18:56:18.145431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.645 qpair failed and we were unable to recover it.
00:35:31.645 [2024-11-17 18:56:18.145655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.645 [2024-11-17 18:56:18.145740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.645 qpair failed and we were unable to recover it.
00:35:31.645 [2024-11-17 18:56:18.145966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.645 [2024-11-17 18:56:18.146032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.645 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.146297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.146365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.146661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.146751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.147001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.147066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.147361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.147427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.147696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.147762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.147962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.148026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.148281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.148346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.148557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.148657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.148954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.149069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.149318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.149389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.149615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.149698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.149931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.150008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.150238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.150303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.150563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.150631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.150942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.151010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.151282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.151346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.151568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.151632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.151868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.924 [2024-11-17 18:56:18.151934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.924 qpair failed and we were unable to recover it.
00:35:31.924 [2024-11-17 18:56:18.152138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.152203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.152454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.152521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.152815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.152889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.153101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.153199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.153456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.153523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.153748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.153815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.154063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.154128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.154421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.154487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.154764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.154831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.155078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.155144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.155382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.155449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.155700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.155765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.155964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.156029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.156227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.156295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.156587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.156713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.156979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.925 [2024-11-17 18:56:18.157044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:31.925 qpair failed and we were unable to recover it.
00:35:31.925 [2024-11-17 18:56:18.157327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.157392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 00:35:31.925 [2024-11-17 18:56:18.157635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.157720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 00:35:31.925 [2024-11-17 18:56:18.157959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.158026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 00:35:31.925 [2024-11-17 18:56:18.158284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.158350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 00:35:31.925 [2024-11-17 18:56:18.158626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.158724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 
00:35:31.925 [2024-11-17 18:56:18.158973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.159039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 00:35:31.925 [2024-11-17 18:56:18.159256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.159320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 00:35:31.925 [2024-11-17 18:56:18.159581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.159666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 00:35:31.925 [2024-11-17 18:56:18.159912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.159978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 00:35:31.925 [2024-11-17 18:56:18.160304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.160369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 
00:35:31.925 [2024-11-17 18:56:18.160623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.160705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 00:35:31.925 [2024-11-17 18:56:18.160965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.161033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 00:35:31.925 [2024-11-17 18:56:18.161285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.161352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 00:35:31.925 [2024-11-17 18:56:18.161665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.161777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 00:35:31.925 [2024-11-17 18:56:18.162087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.162151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 
00:35:31.925 [2024-11-17 18:56:18.162411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.162475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 00:35:31.925 [2024-11-17 18:56:18.162741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.925 [2024-11-17 18:56:18.162808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.925 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.163035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.163101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.163367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.163433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.163725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.163791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 
00:35:31.926 [2024-11-17 18:56:18.164016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.164081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.164294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.164360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.164583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.164652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.164901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.164969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.165253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.165318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 
00:35:31.926 [2024-11-17 18:56:18.165557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.165622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.165865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.165930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.166180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.166247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.166500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.166564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.166818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.166885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 
00:35:31.926 [2024-11-17 18:56:18.167097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.167161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.167432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.167496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.167760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.167860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.168116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.168184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.168471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.168535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 
00:35:31.926 [2024-11-17 18:56:18.168807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.168874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.169072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.169138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.169355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.169423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.169647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.169754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.170014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.170079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 
00:35:31.926 [2024-11-17 18:56:18.170323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.170390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.170646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.170726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.170932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.170996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.171254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.171322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.171627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.171724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 
00:35:31.926 [2024-11-17 18:56:18.172017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.172081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.172383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.172450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.172718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.172809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.173078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.173153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.926 [2024-11-17 18:56:18.173401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.173468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 
00:35:31.926 [2024-11-17 18:56:18.173760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.926 [2024-11-17 18:56:18.173827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.926 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.174030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.174095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.174421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.174489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.174752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.174818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.175069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.175134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 
00:35:31.927 [2024-11-17 18:56:18.175413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.175480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.175758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.175824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.175924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4d630 (9): Bad file descriptor 00:35:31.927 [2024-11-17 18:56:18.176356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.176455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.176788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.176861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.177118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.177186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 
00:35:31.927 [2024-11-17 18:56:18.177431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.177496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.177796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.177864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.178080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.178148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.178385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.178450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.178689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.178759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 
00:35:31.927 [2024-11-17 18:56:18.178966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.179031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.179281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.179347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.179576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.179640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.179917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.179982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.180282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.180347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 
00:35:31.927 [2024-11-17 18:56:18.180566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.180630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.180894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.180959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.181212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.181280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.181538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.181603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.181861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.181928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 
00:35:31.927 [2024-11-17 18:56:18.182136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.182200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.182450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.182516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.182777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.182843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.183053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.183119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.183351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.183419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 
00:35:31.927 [2024-11-17 18:56:18.183643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.183727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.183938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.184002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.184294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.184360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.184580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.184646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.184860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.184937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 
00:35:31.927 [2024-11-17 18:56:18.185161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.185227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.185420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.185485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.185743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.927 [2024-11-17 18:56:18.185810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.927 qpair failed and we were unable to recover it. 00:35:31.927 [2024-11-17 18:56:18.186030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.928 [2024-11-17 18:56:18.186095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.928 qpair failed and we were unable to recover it. 00:35:31.928 [2024-11-17 18:56:18.186359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.928 [2024-11-17 18:56:18.186424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.928 qpair failed and we were unable to recover it. 
00:35:31.928 [... the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error pair for tqpair=0xa3f690 (addr=10.0.0.2, port=4420) repeats through 2024-11-17 18:56:18.221301; each qpair failed and could not be recovered ...]
00:35:31.931 [2024-11-17 18:56:18.221581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.221645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.221895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.221960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.222163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.222227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.222462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.222527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.222802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.222868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 
00:35:31.931 [2024-11-17 18:56:18.223083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.223150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.223403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.223469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.223703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.223768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.223993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.224057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.224347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.224414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 
00:35:31.931 [2024-11-17 18:56:18.224704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.224769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.225028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.225096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.225304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.225372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.225661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.225741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.225960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.226025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 
00:35:31.931 [2024-11-17 18:56:18.226325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.226389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.226593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.226657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.226922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.226988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.227192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.227256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.227540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.227605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 
00:35:31.931 [2024-11-17 18:56:18.227852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.227929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.228167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.228231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.228445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.228511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.228804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.228870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.229084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.229148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 
00:35:31.931 [2024-11-17 18:56:18.229378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.229443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.229653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.229734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.229962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.230027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.230248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.230312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.230569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.230633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 
00:35:31.931 [2024-11-17 18:56:18.230946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.231011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.231270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.231335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.231601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.231665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.231933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.231997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 00:35:31.931 [2024-11-17 18:56:18.232266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.931 [2024-11-17 18:56:18.232335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.931 qpair failed and we were unable to recover it. 
00:35:31.931 [2024-11-17 18:56:18.232589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.232653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.232956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.233021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.233273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.233338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.233603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.233669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.233976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.234040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 
00:35:31.932 [2024-11-17 18:56:18.234300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.234364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.234598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.234664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.234888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.234954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.235219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.235285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.235532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.235597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 
00:35:31.932 [2024-11-17 18:56:18.235821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.235887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.236143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.236208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.236427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.236491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.236753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.236821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.237061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.237126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 
00:35:31.932 [2024-11-17 18:56:18.237338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.237406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.237634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.237716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.237973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.238037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.238283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.238347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.238590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.238655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 
00:35:31.932 [2024-11-17 18:56:18.238916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.238982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.239273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.239337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.239551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.239615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.239916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.239988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.240247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.240310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 
00:35:31.932 [2024-11-17 18:56:18.240551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.240615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.240848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.240924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.241128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.241193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.241419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.241483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.241768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.241835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 
00:35:31.932 [2024-11-17 18:56:18.242109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.242177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.242445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.242510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.242775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.242841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.243068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.243133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.243434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.243499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 
00:35:31.932 [2024-11-17 18:56:18.243708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.243780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.244000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.244064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.244325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.932 [2024-11-17 18:56:18.244390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.932 qpair failed and we were unable to recover it. 00:35:31.932 [2024-11-17 18:56:18.244631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.244721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.244970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.245036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 
00:35:31.933 [2024-11-17 18:56:18.245295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.245360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.245649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.245738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.245999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.246063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.246301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.246367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.246617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.246702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 
00:35:31.933 [2024-11-17 18:56:18.246956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.247021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.247270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.247335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.247577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.247643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.247912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.247977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.248263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.248328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 
00:35:31.933 [2024-11-17 18:56:18.248574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.248639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.248907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.248971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.249174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.249239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.249478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.249553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.249815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.249881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 
00:35:31.933 [2024-11-17 18:56:18.250141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.250206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.250465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.250528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.250731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.250799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.251065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.251129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.251404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.251468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 
00:35:31.933 [2024-11-17 18:56:18.251661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.251760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.252022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.252088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.252294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.252362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.252593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.252658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.252939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.253008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 
00:35:31.933 [2024-11-17 18:56:18.253264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.253327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.253533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.253600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.253897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.253965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.254255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.254319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.254608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.254690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 
00:35:31.933 [2024-11-17 18:56:18.254946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.255015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.933 [2024-11-17 18:56:18.255242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.933 [2024-11-17 18:56:18.255308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.933 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.255594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.255659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.255989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.256055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.256274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.256339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 
00:35:31.934 [2024-11-17 18:56:18.256589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.256653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.256892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.256959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.257210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.257275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.257559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.257623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.257890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.257956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 
00:35:31.934 [2024-11-17 18:56:18.258256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.258321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.258521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.258585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.258846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.258912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.259173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.259240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.259500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.259564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 
00:35:31.934 [2024-11-17 18:56:18.259849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.259915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.260217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.260283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.260560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.260625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.260891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.260956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.261210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.261275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 
00:35:31.934 [2024-11-17 18:56:18.261472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.261537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.261796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.261863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.262155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.262219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.262483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.262548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.262800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.262876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 
00:35:31.934 [2024-11-17 18:56:18.263129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.263194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.263444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.263508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.263761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.263828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.264114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.264180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.264417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.264481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 
00:35:31.934 [2024-11-17 18:56:18.264756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.264823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.265116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.265183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.265426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.265490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.265748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.265815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.266103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.266169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 
00:35:31.934 [2024-11-17 18:56:18.266449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.266513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.266778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.266844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.267093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.267158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.267414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.267481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.934 qpair failed and we were unable to recover it. 00:35:31.934 [2024-11-17 18:56:18.267696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.934 [2024-11-17 18:56:18.267762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 
00:35:31.935 [2024-11-17 18:56:18.267989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.268052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.268300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.268367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.268621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.268710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.268954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.269024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.269244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.269308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 
00:35:31.935 [2024-11-17 18:56:18.269494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.269558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.269803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.269869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.270114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.270181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.270426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.270493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.270716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.270783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 
00:35:31.935 [2024-11-17 18:56:18.271024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.271090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.271307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.271385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.271639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.271727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.271959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.272025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.272271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.272335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 
00:35:31.935 [2024-11-17 18:56:18.272550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.272618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.272862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.272929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.273132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.273197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.273444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.273509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.273802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.273867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 
00:35:31.935 [2024-11-17 18:56:18.274160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.274225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.274471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.274536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.274745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.274812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.275107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.275171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.275414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.275482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 
00:35:31.935 [2024-11-17 18:56:18.275714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.275781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.276063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.276128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.276413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.276477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.276787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.276852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 00:35:31.935 [2024-11-17 18:56:18.277079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.935 [2024-11-17 18:56:18.277144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.935 qpair failed and we were unable to recover it. 
00:35:31.938 [2024-11-17 18:56:18.312306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.938 [2024-11-17 18:56:18.312371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.938 qpair failed and we were unable to recover it. 00:35:31.938 [2024-11-17 18:56:18.312576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.938 [2024-11-17 18:56:18.312641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.938 qpair failed and we were unable to recover it. 00:35:31.938 [2024-11-17 18:56:18.312928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.938 [2024-11-17 18:56:18.312994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.938 qpair failed and we were unable to recover it. 00:35:31.938 [2024-11-17 18:56:18.313240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.938 [2024-11-17 18:56:18.313304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.938 qpair failed and we were unable to recover it. 00:35:31.938 [2024-11-17 18:56:18.313542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.938 [2024-11-17 18:56:18.313607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.938 qpair failed and we were unable to recover it. 
00:35:31.938 [2024-11-17 18:56:18.313887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.938 [2024-11-17 18:56:18.313952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.938 qpair failed and we were unable to recover it. 00:35:31.938 [2024-11-17 18:56:18.314158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.938 [2024-11-17 18:56:18.314222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.938 qpair failed and we were unable to recover it. 00:35:31.938 [2024-11-17 18:56:18.314511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.938 [2024-11-17 18:56:18.314575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.938 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.314842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.314910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.315197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.315262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 
00:35:31.939 [2024-11-17 18:56:18.315524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.315590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.315871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.315938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.316142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.316207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.316421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.316485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.316768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.316835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 
00:35:31.939 [2024-11-17 18:56:18.317060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.317126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.317336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.317400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.317627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.317704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.317931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.317997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.318247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.318311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 
00:35:31.939 [2024-11-17 18:56:18.318571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.318635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.318879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.318947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.319155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.319221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.319486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.319551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.319794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.319861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 
00:35:31.939 [2024-11-17 18:56:18.320117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.320183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.320426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.320491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.320710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.320777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.321007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.321072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.321307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.321372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 
00:35:31.939 [2024-11-17 18:56:18.321613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.321692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.321907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.321972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.322180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.322244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.322459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.322523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.322749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.322815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 
00:35:31.939 [2024-11-17 18:56:18.323080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.323145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.323352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.323416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.323713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.323779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.324016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.324081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.324366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.324431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 
00:35:31.939 [2024-11-17 18:56:18.324706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.324772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.325062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.325126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.325379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.325445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.325707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.325774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.325987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.326052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 
00:35:31.939 [2024-11-17 18:56:18.326302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.939 [2024-11-17 18:56:18.326367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.939 qpair failed and we were unable to recover it. 00:35:31.939 [2024-11-17 18:56:18.326571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.326636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.326874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.326939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.327231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.327305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.327602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.327666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 
00:35:31.940 [2024-11-17 18:56:18.327886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.327951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.328162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.328226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.328432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.328495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.328715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.328780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.328999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.329066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 
00:35:31.940 [2024-11-17 18:56:18.329324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.329388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.329577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.329641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.329890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.329955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.330158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.330223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.330484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.330548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 
00:35:31.940 [2024-11-17 18:56:18.330792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.330858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.331106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.331170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.331402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.331468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.331733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.331799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.332045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.332110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 
00:35:31.940 [2024-11-17 18:56:18.332330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.332395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.332648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.332729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.333016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.333081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.333322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.333389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.333599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.333664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 
00:35:31.940 [2024-11-17 18:56:18.333923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.333988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.334231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.334300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.334557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.334622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.334859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.334924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 00:35:31.940 [2024-11-17 18:56:18.335168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.940 [2024-11-17 18:56:18.335233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.940 qpair failed and we were unable to recover it. 
00:35:31.940 [2024-11-17 18:56:18.335487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.940 [2024-11-17 18:56:18.335551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.940 qpair failed and we were unable to recover it.
...
00:35:31.943 [2024-11-17 18:56:18.371654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:31.943 [2024-11-17 18:56:18.371737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:31.943 qpair failed and we were unable to recover it.
00:35:31.943 [2024-11-17 18:56:18.372032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.943 [2024-11-17 18:56:18.372097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.943 qpair failed and we were unable to recover it. 00:35:31.943 [2024-11-17 18:56:18.372392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.943 [2024-11-17 18:56:18.372456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.943 qpair failed and we were unable to recover it. 00:35:31.943 [2024-11-17 18:56:18.372752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.943 [2024-11-17 18:56:18.372818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.943 qpair failed and we were unable to recover it. 00:35:31.943 [2024-11-17 18:56:18.373109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.373174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.373465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.373529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 
00:35:31.944 [2024-11-17 18:56:18.373823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.373888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.374136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.374201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.374492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.374557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.374868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.374933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.375183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.375248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 
00:35:31.944 [2024-11-17 18:56:18.375497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.375566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.375844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.375911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.376201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.376266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.376531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.376595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.376908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.376974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 
00:35:31.944 [2024-11-17 18:56:18.377275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.377340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.377640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.377723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.378009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.378074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.378265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.378331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.378582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.378646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 
00:35:31.944 [2024-11-17 18:56:18.378916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.378981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.379272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.379336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.379597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.379662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.379952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.380017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.380305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.380369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 
00:35:31.944 [2024-11-17 18:56:18.380670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.380754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.381002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.381069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.381362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.381427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.381715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.381784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.382091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.382155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 
00:35:31.944 [2024-11-17 18:56:18.382434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.382500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.382764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.382831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.383097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.383160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.383404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.383468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.383724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.383790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 
00:35:31.944 [2024-11-17 18:56:18.384046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.384120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.384422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.384486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.384745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.384813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.385054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.385119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.385363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.385428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 
00:35:31.944 [2024-11-17 18:56:18.385672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.385754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.944 qpair failed and we were unable to recover it. 00:35:31.944 [2024-11-17 18:56:18.386052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.944 [2024-11-17 18:56:18.386117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.386372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.386437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.386699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.386764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.386962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.387026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 
00:35:31.945 [2024-11-17 18:56:18.387307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.387372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.387669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.387765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.388010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.388076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.388369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.388434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.388736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.388804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 
00:35:31.945 [2024-11-17 18:56:18.389100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.389165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.389464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.389529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.389794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.389861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.390116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.390181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.390472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.390536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 
00:35:31.945 [2024-11-17 18:56:18.390781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.390848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.391062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.391126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.391355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.391420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.391660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.391738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.392036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.392101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 
00:35:31.945 [2024-11-17 18:56:18.392393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.392458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.392668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.392750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.392971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.393046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.393349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.393415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.393638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.393721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 
00:35:31.945 [2024-11-17 18:56:18.394014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.394078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.394348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.394412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.394622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.394708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.394974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.395040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.395268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.395333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 
00:35:31.945 [2024-11-17 18:56:18.395594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.395659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.395984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.396049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.396341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.396405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.396664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.396748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.397006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.397071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 
00:35:31.945 [2024-11-17 18:56:18.397274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.397340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.397583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.397649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.397962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.398028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.398237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.398302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.945 qpair failed and we were unable to recover it. 00:35:31.945 [2024-11-17 18:56:18.398592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.945 [2024-11-17 18:56:18.398657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.946 qpair failed and we were unable to recover it. 
00:35:31.949 [2024-11-17 18:56:18.435490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.435554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.435854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.435920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.436179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.436244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.436494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.436559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.436826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.436893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 
00:35:31.949 [2024-11-17 18:56:18.437147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.437212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.437506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.437571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.437876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.437942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.438163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.438229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.438492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.438556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 
00:35:31.949 [2024-11-17 18:56:18.438841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.438907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.439203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.439267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.439512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.439579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.439819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.439885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.440128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.440194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 
00:35:31.949 [2024-11-17 18:56:18.440440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.440505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.440745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.440810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.441015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.441080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.441368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.441433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.441690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.441755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 
00:35:31.949 [2024-11-17 18:56:18.441999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.442064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.442309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.442374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.442644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.442727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.443028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.443092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.443400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.443465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 
00:35:31.949 [2024-11-17 18:56:18.443766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.443832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.444084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.444148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.444402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.444467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.444760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.444827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.445133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.445197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 
00:35:31.949 [2024-11-17 18:56:18.445439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.445504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.445791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.445857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.446105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.446171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.446466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.446531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.446765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.446831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 
00:35:31.949 [2024-11-17 18:56:18.447073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.447138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.447438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.447502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.949 qpair failed and we were unable to recover it. 00:35:31.949 [2024-11-17 18:56:18.447750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.949 [2024-11-17 18:56:18.447817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.448099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.448164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.448459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.448523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 
00:35:31.950 [2024-11-17 18:56:18.448792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.448858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.449113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.449178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.449389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.449456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.449750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.449816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.450013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.450080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 
00:35:31.950 [2024-11-17 18:56:18.450305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.450369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.450663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.450741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.450995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.451060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.451347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.451412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.451725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.451802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 
00:35:31.950 [2024-11-17 18:56:18.451999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.452064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.452351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.452415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.452696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.452762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.453065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.453130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.453428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.453492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 
00:35:31.950 [2024-11-17 18:56:18.453787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.453853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.454072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.454136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.454421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.454485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.454735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.454801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.455063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.455128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 
00:35:31.950 [2024-11-17 18:56:18.455370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.455433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.455651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.455736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.455988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.456057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.456375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.456439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.456741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.456808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 
00:35:31.950 [2024-11-17 18:56:18.457057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.457122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.457386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.457450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.457737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.457803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.458059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.458124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.458411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.458475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 
00:35:31.950 [2024-11-17 18:56:18.458727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.458793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.458996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.459061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.459351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.459415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.459720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.459785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 00:35:31.950 [2024-11-17 18:56:18.460033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.460098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it. 
00:35:31.950 [2024-11-17 18:56:18.460345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.950 [2024-11-17 18:56:18.460412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:31.950 qpair failed and we were unable to recover it.
[the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0xa3f690 (addr=10.0.0.2, port=4420) repeats 114 more times between 18:56:18.460 and 18:56:18.497, each followed by "qpair failed and we were unable to recover it."; the elapsed-time prefix advances from 00:35:31.950 to 00:35:32.231]
00:35:32.231 [2024-11-17 18:56:18.498161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.498225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.498526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.498591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.498852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.498918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.499196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.499261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.499549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.499613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 
00:35:32.231 [2024-11-17 18:56:18.499923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.499989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.500242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.500307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.500511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.500583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.500829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.500896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.501174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.501242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 
00:35:32.231 [2024-11-17 18:56:18.501486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.501551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.501788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.501856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.502163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.502227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.502478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.502542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.502800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.502867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 
00:35:32.231 [2024-11-17 18:56:18.503134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.503198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.503445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.503513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.503738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.503805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.504023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.504088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.504375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.504440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 
00:35:32.231 [2024-11-17 18:56:18.504726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.504793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.505055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.505119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.505418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.505483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.505790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.505860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.506128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.506206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 
00:35:32.231 [2024-11-17 18:56:18.506447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.506512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.506795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.506862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.507126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.507190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.507475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.507539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.507847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.507912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 
00:35:32.231 [2024-11-17 18:56:18.508175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-11-17 18:56:18.508241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.231 qpair failed and we were unable to recover it. 00:35:32.231 [2024-11-17 18:56:18.508500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.508564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.508820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.508885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.509129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.509193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.509489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.509554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 
00:35:32.232 [2024-11-17 18:56:18.509806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.509872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.510080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.510155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.510407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.510472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.510719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.510786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.511049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.511113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 
00:35:32.232 [2024-11-17 18:56:18.511360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.511425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.511636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.511715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.512004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.512069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.512320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.512385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.512670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.512749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 
00:35:32.232 [2024-11-17 18:56:18.513004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.513069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.513262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.513327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.513567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.513632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.513954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.514019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.514265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.514330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 
00:35:32.232 [2024-11-17 18:56:18.514597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.514661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.514996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.515062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.515307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.515372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.515651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.515736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.516001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.516066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 
00:35:32.232 [2024-11-17 18:56:18.516314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.516378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.516636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.516719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.516967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.517033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.517285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.517349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.517632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.517714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 
00:35:32.232 [2024-11-17 18:56:18.518018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.518083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.518334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.518401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.518732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.518799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.519035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.519100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.519399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.519463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 
00:35:32.232 [2024-11-17 18:56:18.519713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.519781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.519982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.520049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.520341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.520405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.520705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.520772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 00:35:32.232 [2024-11-17 18:56:18.521060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-11-17 18:56:18.521125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.232 qpair failed and we were unable to recover it. 
00:35:32.233 [2024-11-17 18:56:18.521410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.233 [2024-11-17 18:56:18.521473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.233 qpair failed and we were unable to recover it. 00:35:32.233 [2024-11-17 18:56:18.521717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.233 [2024-11-17 18:56:18.521783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.233 qpair failed and we were unable to recover it. 00:35:32.233 [2024-11-17 18:56:18.522005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.233 [2024-11-17 18:56:18.522070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.233 qpair failed and we were unable to recover it. 00:35:32.233 [2024-11-17 18:56:18.522312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.233 [2024-11-17 18:56:18.522376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.233 qpair failed and we were unable to recover it. 00:35:32.233 [2024-11-17 18:56:18.522666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.233 [2024-11-17 18:56:18.522744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.233 qpair failed and we were unable to recover it. 
00:35:32.233 [2024-11-17 18:56:18.523003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.233 [2024-11-17 18:56:18.523072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.233 qpair failed and we were unable to recover it.
00:35:32.236 [2024-11-17 18:56:18.560814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.560880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.561122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.561189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.561480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.561544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.561762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.561828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.562040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.562107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 
00:35:32.236 [2024-11-17 18:56:18.562337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.562404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.562702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.562768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.563017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.563083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.563338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.563403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.563730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.563796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 
00:35:32.236 [2024-11-17 18:56:18.564059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.564124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.564411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.564476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.564763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.564830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.565126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.565191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.565481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.565545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 
00:35:32.236 [2024-11-17 18:56:18.565842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.565908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.566168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.566233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.566459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.566523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.566772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.566837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.567122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.567188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 
00:35:32.236 [2024-11-17 18:56:18.567432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.567498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.567712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.567780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.236 [2024-11-17 18:56:18.568070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.236 [2024-11-17 18:56:18.568136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.236 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.568421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.568501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.568801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.568868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 
00:35:32.237 [2024-11-17 18:56:18.569090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.569156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.569443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.569507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.569760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.569826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.570114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.570180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.570466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.570531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 
00:35:32.237 [2024-11-17 18:56:18.570834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.570901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.571145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.571211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.571498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.571562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.571777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.571845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.572054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.572119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 
00:35:32.237 [2024-11-17 18:56:18.572366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.572430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.572713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.572780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.573059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.573125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.573367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.573434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.573724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.573790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 
00:35:32.237 [2024-11-17 18:56:18.574084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.574149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.574357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.574422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.574718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.574784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.574990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.575055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.575339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.575404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 
00:35:32.237 [2024-11-17 18:56:18.575709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.575775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.576024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.576089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.576379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.576444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.576708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.576775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.577028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.577093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 
00:35:32.237 [2024-11-17 18:56:18.577342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.577420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.577720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.577806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.578051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.578115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.578408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.578473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.578726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.578795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 
00:35:32.237 [2024-11-17 18:56:18.579090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.579155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.579373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.579438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.579659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.579755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.580050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.580115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.580374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.580439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 
00:35:32.237 [2024-11-17 18:56:18.580710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.237 [2024-11-17 18:56:18.580777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.237 qpair failed and we were unable to recover it. 00:35:32.237 [2024-11-17 18:56:18.581038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.238 [2024-11-17 18:56:18.581103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.238 qpair failed and we were unable to recover it. 00:35:32.238 [2024-11-17 18:56:18.581347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.238 [2024-11-17 18:56:18.581412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.238 qpair failed and we were unable to recover it. 00:35:32.238 [2024-11-17 18:56:18.581701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.238 [2024-11-17 18:56:18.581768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.238 qpair failed and we were unable to recover it. 00:35:32.238 [2024-11-17 18:56:18.582065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.238 [2024-11-17 18:56:18.582131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.238 qpair failed and we were unable to recover it. 
00:35:32.238 [2024-11-17 18:56:18.582437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.238 [2024-11-17 18:56:18.582502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.238 qpair failed and we were unable to recover it. 00:35:32.238 [2024-11-17 18:56:18.582729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.238 [2024-11-17 18:56:18.582795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.238 qpair failed and we were unable to recover it. 00:35:32.238 [2024-11-17 18:56:18.583088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.238 [2024-11-17 18:56:18.583152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.238 qpair failed and we were unable to recover it. 00:35:32.238 [2024-11-17 18:56:18.583407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.238 [2024-11-17 18:56:18.583472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.238 qpair failed and we were unable to recover it. 00:35:32.238 [2024-11-17 18:56:18.583761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.238 [2024-11-17 18:56:18.583828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.238 qpair failed and we were unable to recover it. 
00:35:32.238 [2024-11-17 18:56:18.584118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.238 [2024-11-17 18:56:18.584182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.238 qpair failed and we were unable to recover it. 00:35:32.238 [2024-11-17 18:56:18.584419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.238 [2024-11-17 18:56:18.584484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.238 qpair failed and we were unable to recover it. 00:35:32.238 [2024-11-17 18:56:18.584725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.238 [2024-11-17 18:56:18.584792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.238 qpair failed and we were unable to recover it. 00:35:32.238 [2024-11-17 18:56:18.584996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.238 [2024-11-17 18:56:18.585060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.238 qpair failed and we were unable to recover it. 00:35:32.238 [2024-11-17 18:56:18.585297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.238 [2024-11-17 18:56:18.585362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.238 qpair failed and we were unable to recover it. 
00:35:32.238 [2024-11-17 18:56:18.585625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.585703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.585989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.586052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.586304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.586369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.586587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.586653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.586924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.586988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.587240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.587307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.587605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.587671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.587959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.588026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.588303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.588366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.588624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.588708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.588997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.589063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.589337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.589402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.589704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.589771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.590054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.590120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.590319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.590383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.590588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.590655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.590937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.591013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.591280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.591345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.591589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.591654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.591959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.592025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.592273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.592336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.592564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.592628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.592883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.592949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.593162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.593228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.238 qpair failed and we were unable to recover it.
00:35:32.238 [2024-11-17 18:56:18.593465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.238 [2024-11-17 18:56:18.593530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.593783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.593852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.594120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.594185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.594387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.594453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.594715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.594781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.594972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.595037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.595268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.595332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.595621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.595714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.595975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.596041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.596253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.596318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.596517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.596583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.596908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.596973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.597248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.597313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.597554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.597618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.597868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.597934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.598179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.598244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.598490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.598554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.598782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.598848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.599104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.599169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.599424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.599498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.599748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.599814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.600071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.600136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.600357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.600421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.600667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.600744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.600954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.601021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.601330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.601394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.601603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.601668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.601920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.601984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.602247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.602311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.602553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.602618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.602898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.602965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.603263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.603326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.603585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.239 [2024-11-17 18:56:18.603650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.239 qpair failed and we were unable to recover it.
00:35:32.239 [2024-11-17 18:56:18.603944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.604011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.604225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.604289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.604575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.604641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.604858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.604923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.605166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.605230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.605492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.605556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.605845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.605913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.606153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.606218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.606455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.606520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.606766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.606833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.607035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.607099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.607348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.607412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.607625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.607704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.607960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.608025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.608255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.608323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.608517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.608581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.608840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.608907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.609170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.609236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.609524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.609588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.609959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.610025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.610276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.610341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.610629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.610716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.610978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.611043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.611334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.611399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.611623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.611718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.611938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.612006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.612214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.612279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.612536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.612610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.612844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.612911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.613123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.613187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.613396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.613461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.613667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.613755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.613985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.614049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.614331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.614397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.614641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.614724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.614972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.615038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.615288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.615353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.615561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.615627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.240 qpair failed and we were unable to recover it.
00:35:32.240 [2024-11-17 18:56:18.615894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.240 [2024-11-17 18:56:18.615959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.616257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.616322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.616570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.616636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.616881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.616947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.617133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.617200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.617483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.617547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.617810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.617877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.618090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.618154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.618360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.618424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.618657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.618753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.619005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.619070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.619326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.619392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.619647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.619734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.619985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.620052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.620262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.620326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.620574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.620639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.620918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.620994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.621185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.621251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.621506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.621571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.621839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.241 [2024-11-17 18:56:18.621904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.241 qpair failed and we were unable to recover it.
00:35:32.241 [2024-11-17 18:56:18.622162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.622227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.622446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.622510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.622771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.622838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.623039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.623108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.623365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.623429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 
00:35:32.241 [2024-11-17 18:56:18.623717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.623782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.624023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.624088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.624383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.624448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.624700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.624766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.625053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.625117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 
00:35:32.241 [2024-11-17 18:56:18.625337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.625402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.625700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.625766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.626012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.626077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.626296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.626360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.626559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.626624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 
00:35:32.241 [2024-11-17 18:56:18.626944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.627009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.627211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.627277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.627516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.627579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.241 [2024-11-17 18:56:18.627883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.241 [2024-11-17 18:56:18.627948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.241 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.628213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.628278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 
00:35:32.242 [2024-11-17 18:56:18.628521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.628584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.628817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.628882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.629125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.629190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.629446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.629509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.629739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.629806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 
00:35:32.242 [2024-11-17 18:56:18.630023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.630088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.630337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.630401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.630653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.630731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.630979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.631045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.631334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.631399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 
00:35:32.242 [2024-11-17 18:56:18.631651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.631728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.631974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.632038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.632244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.632308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.632554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.632619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.632874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.632940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 
00:35:32.242 [2024-11-17 18:56:18.633186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.633251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.633507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.633571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.633851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.633933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.634160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.634228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.634483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.634548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 
00:35:32.242 [2024-11-17 18:56:18.634836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.634902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.635156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.635220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.635411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.635475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.635715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.635782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.635974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.636041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 
00:35:32.242 [2024-11-17 18:56:18.636314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.636378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.636573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.636638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.636899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.636964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.637245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.637310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.637568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.637632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 
00:35:32.242 [2024-11-17 18:56:18.637889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.637954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.638211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.638276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.638573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.638637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.638902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.638967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.639196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.639260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 
00:35:32.242 [2024-11-17 18:56:18.639509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.639574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.242 qpair failed and we were unable to recover it. 00:35:32.242 [2024-11-17 18:56:18.639800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.242 [2024-11-17 18:56:18.639866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.640083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.640148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.640398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.640465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.640706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.640775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 
00:35:32.243 [2024-11-17 18:56:18.641021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.641090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.641337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.641401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.641706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.641774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.642027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.642093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.642291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.642356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 
00:35:32.243 [2024-11-17 18:56:18.642655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.642738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.642958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.643021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.643235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.643300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.643517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.643584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.643845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.643911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 
00:35:32.243 [2024-11-17 18:56:18.644197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.644262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.644533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.644597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.644856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.644921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.645206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.645271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.645524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.645588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 
00:35:32.243 [2024-11-17 18:56:18.645810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.645878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.646124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.646189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.646446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.646512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.646762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.646829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.647050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.647113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 
00:35:32.243 [2024-11-17 18:56:18.647358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.647423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.647685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.647751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.648011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.648075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.648279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.648343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.648598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.648663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 
00:35:32.243 [2024-11-17 18:56:18.648922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.648987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.649231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.649296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.649517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.649581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.649832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.649898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 00:35:32.243 [2024-11-17 18:56:18.650132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.650196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.243 qpair failed and we were unable to recover it. 
00:35:32.243 [2024-11-17 18:56:18.650452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.243 [2024-11-17 18:56:18.650518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.650785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.650851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.651103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.651169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.651390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.651454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.651701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.651766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 
00:35:32.244 [2024-11-17 18:56:18.652010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.652077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.652297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.652363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.652560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.652624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.652869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.652934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.653223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.653287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 
00:35:32.244 [2024-11-17 18:56:18.653510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.653573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.653814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.653880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.654166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.654230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.654472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.654536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.654766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.654832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 
00:35:32.244 [2024-11-17 18:56:18.655136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.655211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.655502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.655568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.655833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.655899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.656144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.656209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.656481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.656546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 
00:35:32.244 [2024-11-17 18:56:18.656763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.656830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.657048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.657113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.657399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.657464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.657727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.657792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.658039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.658105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 
00:35:32.244 [2024-11-17 18:56:18.658318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.658383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.658600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.658664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.658892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.658960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.659226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.659292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.659590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.659655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 
00:35:32.244 [2024-11-17 18:56:18.659924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.659989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.660245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.660311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.660557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.660621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.660938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.661004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.661256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.661321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 
00:35:32.244 [2024-11-17 18:56:18.661573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.661637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.661909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.661974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.662237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.244 [2024-11-17 18:56:18.662301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.244 qpair failed and we were unable to recover it. 00:35:32.244 [2024-11-17 18:56:18.662554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.662619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.662891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.662956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 
00:35:32.245 [2024-11-17 18:56:18.663257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.663320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.663565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.663629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.663900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.663966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.664265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.664330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.664591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.664654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 
00:35:32.245 [2024-11-17 18:56:18.664918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.664983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.665206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.665270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.665521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.665588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.665865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.665932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.666145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.666211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 
00:35:32.245 [2024-11-17 18:56:18.666469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.666533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.666768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.666834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.667088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.667153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.667395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.667460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.667708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.667776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 
00:35:32.245 [2024-11-17 18:56:18.667996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.668061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.668364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.668439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.668644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.668723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.668978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.669044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.669332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.669397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 
00:35:32.245 [2024-11-17 18:56:18.669639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.669730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.669976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.670041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.670290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.670355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.670617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.670700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.670920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.670986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 
00:35:32.245 [2024-11-17 18:56:18.671190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.671255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.671545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.671610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.671890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.671957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.672192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.672256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.672495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.672560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 
00:35:32.245 [2024-11-17 18:56:18.672814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.672881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.673133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.673200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.673444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.673509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.673795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.673862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.674123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.674187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 
00:35:32.245 [2024-11-17 18:56:18.674442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.674507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.245 qpair failed and we were unable to recover it. 00:35:32.245 [2024-11-17 18:56:18.674758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.245 [2024-11-17 18:56:18.674827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.246 qpair failed and we were unable to recover it. 00:35:32.246 [2024-11-17 18:56:18.675115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.246 [2024-11-17 18:56:18.675180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.246 qpair failed and we were unable to recover it. 00:35:32.246 [2024-11-17 18:56:18.675462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.246 [2024-11-17 18:56:18.675527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.246 qpair failed and we were unable to recover it. 00:35:32.246 [2024-11-17 18:56:18.675810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.246 [2024-11-17 18:56:18.675877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.246 qpair failed and we were unable to recover it. 
00:35:32.246 [2024-11-17 18:56:18.676097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.246 [2024-11-17 18:56:18.676162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.246 qpair failed and we were unable to recover it. 00:35:32.246 [2024-11-17 18:56:18.676379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.246 [2024-11-17 18:56:18.676444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.246 qpair failed and we were unable to recover it. 00:35:32.246 [2024-11-17 18:56:18.676704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.246 [2024-11-17 18:56:18.676771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.246 qpair failed and we were unable to recover it. 00:35:32.246 [2024-11-17 18:56:18.677025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.246 [2024-11-17 18:56:18.677099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.246 qpair failed and we were unable to recover it. 00:35:32.246 [2024-11-17 18:56:18.677354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.246 [2024-11-17 18:56:18.677419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.246 qpair failed and we were unable to recover it. 
00:35:32.246 [2024-11-17 18:56:18.677610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.246 [2024-11-17 18:56:18.677708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.246 qpair failed and we were unable to recover it. 00:35:32.246 [2024-11-17 18:56:18.677941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.246 [2024-11-17 18:56:18.678006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.246 qpair failed and we were unable to recover it. 00:35:32.246 [2024-11-17 18:56:18.678267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.246 [2024-11-17 18:56:18.678333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.246 qpair failed and we were unable to recover it. 00:35:32.246 [2024-11-17 18:56:18.678574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.246 [2024-11-17 18:56:18.678640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.246 qpair failed and we were unable to recover it. 00:35:32.246 [2024-11-17 18:56:18.678955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.246 [2024-11-17 18:56:18.679021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.246 qpair failed and we were unable to recover it. 
00:35:32.246 [2024-11-17 18:56:18.679233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.679298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.679590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.679654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.679925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.679992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.680199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.680265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.680493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.680557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.680830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.680897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.681178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.681243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.681495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.681559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.681873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.681939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.682161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.682226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.682438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.682505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.682764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.682831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.683074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.683141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.683448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.683512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.683757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.683823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.684067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.684132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.684386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.684450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.684741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.684808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.685004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.685069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.685269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.685333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.685531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.685595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.685839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.685905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.686157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.686221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.686500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.686564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.246 [2024-11-17 18:56:18.686842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.246 [2024-11-17 18:56:18.686908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.246 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.687161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.687226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.687492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.687556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.687799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.687865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.688120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.688185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.688432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.688496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.688716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.688784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.688997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.689062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.689252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.689318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.689534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.689602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.689890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.689966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.690223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.690288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.690542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.690607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.690829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.690898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.691119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.691185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.691437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.691503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.691724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.691790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.692049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.692114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.692386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.692451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.692658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.692740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.692955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.693020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.693213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.693281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.693526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.693591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.693911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.693978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.694239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.694304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.694509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.694574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.694813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.694879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.695131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.695198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.695412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.695478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.695741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.695808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.696046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.696110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.696399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.696464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.696718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.696785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.697036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.697100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.697382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.697447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.697662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.697741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.697995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.698058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.698347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.698428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.698706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.247 [2024-11-17 18:56:18.698773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.247 qpair failed and we were unable to recover it.
00:35:32.247 [2024-11-17 18:56:18.698997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.699061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.699294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.699358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.699549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.699613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.699835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.699903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.700106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.700173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.700467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.700531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.700813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.700879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.701163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.701228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.701513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.701577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.701840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.701906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.702196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.702260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.702510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.702576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.702863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.702929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.703182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.703247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.703488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.703553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.703748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.703814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.704014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.704082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.704340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.704406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.704647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.704729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.704930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.704995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.705206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.705271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.705552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.705615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.705882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.705948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.706172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.706238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.706479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.706543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.706788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.706856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.707104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.707169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.707423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.707484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.707700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.707763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.707988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.708052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.708345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.708408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.708667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.708742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.708966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.709028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.248 [2024-11-17 18:56:18.709275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.248 [2024-11-17 18:56:18.709337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.248 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.709564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.709625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.709935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.710028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.710331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.710397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.710604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.710667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.710927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.710988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.711256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.711335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.711563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.711628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.711895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.711957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.712208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.712274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.712528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.712592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.712889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.712978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.713235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.713299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.713539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.713603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.713838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.713905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.714175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.714239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.714543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.714613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.714897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.714964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.715210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.715275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.715539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.715605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.715897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.715999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.716272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.716339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.716543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.716633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.716881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.716947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.717244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.717310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.717570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.717637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.717958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.718057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.718302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.718374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.718671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.718753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.718988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.719057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.719353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.719418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.719692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.719759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.719958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.720023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.720296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.720362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.720603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.720668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.720929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.720994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.721278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.721343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.721555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.721623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.249 qpair failed and we were unable to recover it.
00:35:32.249 [2024-11-17 18:56:18.721856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.249 [2024-11-17 18:56:18.721928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.722150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.722216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.722508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.722576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.722814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.722886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.723144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.723213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.723455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.723520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.723814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.723884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.724144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.724210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.724522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.724601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.724836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.724904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.725152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.725218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.725454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.725518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.725796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.725864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.726115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.726184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.726464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.726530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.726757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.726825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.727026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.727092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.727331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.727399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.727702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.727769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.727978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.728044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.728256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.728323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.728534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.728599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.728854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.728924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.729120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.729188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.729412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.729481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.729772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.729840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.730060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.730125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.730355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.730420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.730717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.730787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.731036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.731100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.731387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.731454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.731709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.731777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.732000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.732098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.732420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.732487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.732785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.732852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.733160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.733227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.733481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.733547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.250 [2024-11-17 18:56:18.733826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.250 [2024-11-17 18:56:18.733895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.250 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.734162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.734230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.734520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.734585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.734827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.734897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.735154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.735220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.735478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.735547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.735802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.735869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.736079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.736145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.736413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.736479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.736704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.736772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.737053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.737119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.737362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.737439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.737719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.737786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.738082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.738147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.738422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.738489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.738763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.738831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.739079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.739144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.739433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.739499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.739746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.251 [2024-11-17 18:56:18.739813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.251 qpair failed and we were unable to recover it.
00:35:32.251 [2024-11-17 18:56:18.740077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.740167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.740420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.740485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.740747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.740814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.741017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.741084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.741325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.741390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 
00:35:32.251 [2024-11-17 18:56:18.741639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.741720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.742031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.742120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.742424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.742489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.742785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.742853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.743142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.743208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 
00:35:32.251 [2024-11-17 18:56:18.743492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.743559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.743841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.743908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.744153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.744219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.744509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.744574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.744799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.744904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 
00:35:32.251 [2024-11-17 18:56:18.745138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.745204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.745502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.745569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.745864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.745934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.746191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.746257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.251 qpair failed and we were unable to recover it. 00:35:32.251 [2024-11-17 18:56:18.746469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.251 [2024-11-17 18:56:18.746571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 
00:35:32.252 [2024-11-17 18:56:18.746844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.746938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.747146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.747212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.747461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.747526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.747757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.747824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.748086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.748151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 
00:35:32.252 [2024-11-17 18:56:18.748416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.748483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.748705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.748774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.749063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.749128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.749365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.749438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.749649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.749728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 
00:35:32.252 [2024-11-17 18:56:18.749970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.750062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.750363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.750429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.750647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.750739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.750961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.751027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.751320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.751420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 
00:35:32.252 [2024-11-17 18:56:18.751669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.751766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.752002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.752067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.752329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.752395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.752601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.752667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.752909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.752976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 
00:35:32.252 [2024-11-17 18:56:18.753226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.753293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.753585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.753650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.753920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.753986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.754197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.754262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.754529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.754595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 
00:35:32.252 [2024-11-17 18:56:18.754909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.754975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.755264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.755332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.755585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.755650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.755978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.756044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.756288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.756377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 
00:35:32.252 [2024-11-17 18:56:18.756693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.756761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.757019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.757084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.757343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.252 [2024-11-17 18:56:18.757409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.252 qpair failed and we were unable to recover it. 00:35:32.252 [2024-11-17 18:56:18.757622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.757709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.758046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.758113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 
00:35:32.253 [2024-11-17 18:56:18.758364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.758432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.758697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.758765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.759063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.759127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.759362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.759429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.759662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.759775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 
00:35:32.253 [2024-11-17 18:56:18.760033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.760098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.760381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.760447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.760664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.760751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.761023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.761089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.761338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.761403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 
00:35:32.253 [2024-11-17 18:56:18.761618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.761704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.761954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.762019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.762222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.762287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.762571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.762640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.762973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.763040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 
00:35:32.253 [2024-11-17 18:56:18.763263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.763330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.763624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.763705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.763988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.764055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.764355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.764445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.764747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.764814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 
00:35:32.253 [2024-11-17 18:56:18.765108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.765175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.765395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.765462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.765749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.765849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.766140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.766207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.766460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.766550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 
00:35:32.253 [2024-11-17 18:56:18.766800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.766867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.767120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.767188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.767476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.767564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.767845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.767913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.768202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.768268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 
00:35:32.253 [2024-11-17 18:56:18.768561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.768626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.768898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.768963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.769278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.769347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.769636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.769719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 00:35:32.253 [2024-11-17 18:56:18.769930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.253 [2024-11-17 18:56:18.769995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.253 qpair failed and we were unable to recover it. 
00:35:32.540 [2024-11-17 18:56:18.807256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.540 [2024-11-17 18:56:18.807326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.540 qpair failed and we were unable to recover it. 00:35:32.540 [2024-11-17 18:56:18.807562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.540 [2024-11-17 18:56:18.807629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.540 qpair failed and we were unable to recover it. 00:35:32.540 [2024-11-17 18:56:18.807908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.540 [2024-11-17 18:56:18.807974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.540 qpair failed and we were unable to recover it. 00:35:32.540 [2024-11-17 18:56:18.808190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.540 [2024-11-17 18:56:18.808254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.540 qpair failed and we were unable to recover it. 00:35:32.540 [2024-11-17 18:56:18.808479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.540 [2024-11-17 18:56:18.808580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.540 qpair failed and we were unable to recover it. 
00:35:32.541 [2024-11-17 18:56:18.808882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.808949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.809168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.809233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.809492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.809561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.809842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.809910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.810183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.810250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 
00:35:32.541 [2024-11-17 18:56:18.810502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.810570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.810910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.810978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.811221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.811286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.811507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.811606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.811916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.811984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 
00:35:32.541 [2024-11-17 18:56:18.812255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.812345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.812607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.812732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.812982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.813068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.813288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.813354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.813588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.813664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 
00:35:32.541 [2024-11-17 18:56:18.813992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.814057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.814346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.814412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.814690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.814760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.815026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.815090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.815390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.815455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 
00:35:32.541 [2024-11-17 18:56:18.815759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.815826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.816080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.816144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.816442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.816507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.816763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.816830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.817080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.817147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 
00:35:32.541 [2024-11-17 18:56:18.817395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.817460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.817702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.817769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.817978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.818042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.818257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.818322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.818614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.818691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 
00:35:32.541 [2024-11-17 18:56:18.818925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.818990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.819189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.819257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.819552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.819616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.819926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.819992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 00:35:32.541 [2024-11-17 18:56:18.820252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.541 [2024-11-17 18:56:18.820317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.541 qpair failed and we were unable to recover it. 
00:35:32.542 [2024-11-17 18:56:18.820563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.820628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.820889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.820954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.821162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.821229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.821490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.821555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.821851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.821917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 
00:35:32.542 [2024-11-17 18:56:18.822209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.822274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.822572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.822637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.822860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.822925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.823175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.823239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.823529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.823594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 
00:35:32.542 [2024-11-17 18:56:18.823896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.823961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.824188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.824253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.824503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.824568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.824887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.824954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.825158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.825226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 
00:35:32.542 [2024-11-17 18:56:18.825524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.825588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.825868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.825935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.826218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.826283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.826499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.826563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.826818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.826897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 
00:35:32.542 [2024-11-17 18:56:18.827190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.827257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.827502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.827568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.827830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.827898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.828201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.828267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.828516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.828580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 
00:35:32.542 [2024-11-17 18:56:18.828824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.542 [2024-11-17 18:56:18.828890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.542 qpair failed and we were unable to recover it. 00:35:32.542 [2024-11-17 18:56:18.829146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.829212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.829475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.829539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.829791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.829858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.830105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.830172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 
00:35:32.543 [2024-11-17 18:56:18.830474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.830541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.830830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.830896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.831148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.831212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.831513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.831578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.831891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.831959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 
00:35:32.543 [2024-11-17 18:56:18.832191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.832255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.832503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.832569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.832854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.832924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.833233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.833302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 893359 Killed "${NVMF_APP[@]}" "$@" 00:35:32.543 [2024-11-17 18:56:18.833550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.833616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 
00:35:32.543 [2024-11-17 18:56:18.833861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.833928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.834169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 18:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:35:32.543 [2024-11-17 18:56:18.834235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.834488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 18:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:32.543 [2024-11-17 18:56:18.834554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.834820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.834887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 18:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:32.543 qpair failed and we were unable to recover it. 
00:35:32.543 18:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:32.543 [2024-11-17 18:56:18.835196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.835261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.835460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 18:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.543 [2024-11-17 18:56:18.835528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.835785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.835853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.836065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.836131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 
00:35:32.543 [2024-11-17 18:56:18.836393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.836459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.836748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.836814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.837062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.543 [2024-11-17 18:56:18.837128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.543 qpair failed and we were unable to recover it. 00:35:32.543 [2024-11-17 18:56:18.837421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 [2024-11-17 18:56:18.837486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 00:35:32.544 [2024-11-17 18:56:18.837745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 [2024-11-17 18:56:18.837814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 
00:35:32.544 [2024-11-17 18:56:18.838040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 [2024-11-17 18:56:18.838108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 00:35:32.544 [2024-11-17 18:56:18.838364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 [2024-11-17 18:56:18.838429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 00:35:32.544 [2024-11-17 18:56:18.838709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 [2024-11-17 18:56:18.838775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 00:35:32.544 [2024-11-17 18:56:18.838979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 [2024-11-17 18:56:18.839048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 00:35:32.544 [2024-11-17 18:56:18.839314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 [2024-11-17 18:56:18.839378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 
00:35:32.544 [2024-11-17 18:56:18.839668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 [2024-11-17 18:56:18.839745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 00:35:32.544 [2024-11-17 18:56:18.839947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 [2024-11-17 18:56:18.840015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 00:35:32.544 [2024-11-17 18:56:18.840261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 [2024-11-17 18:56:18.840325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 00:35:32.544 [2024-11-17 18:56:18.840627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 [2024-11-17 18:56:18.840710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 00:35:32.544 [2024-11-17 18:56:18.840968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 [2024-11-17 18:56:18.841036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 
00:35:32.544 18:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=893909 00:35:32.544 [2024-11-17 18:56:18.841281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 18:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:32.544 [2024-11-17 18:56:18.841345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 00:35:32.544 18:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 893909 00:35:32.544 [2024-11-17 18:56:18.841603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 18:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 893909 ']' 00:35:32.544 [2024-11-17 18:56:18.841667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 
00:35:32.544 18:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.544 [2024-11-17 18:56:18.841983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 18:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.544 [2024-11-17 18:56:18.842046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 00:35:32.544 18:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.544 [2024-11-17 18:56:18.842294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 18:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.544 [2024-11-17 18:56:18.842361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 00:35:32.544 18:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.544 [2024-11-17 18:56:18.842607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 [2024-11-17 18:56:18.842693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 
00:35:32.544 [2024-11-17 18:56:18.842964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.544 [2024-11-17 18:56:18.843031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.544 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.843282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.843347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.843549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.843618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.843864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.843969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.844252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.844320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 
00:35:32.545 [2024-11-17 18:56:18.844594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.844662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.844898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.844964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.845261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.845326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.845574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.845642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.845945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.846011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 
00:35:32.545 [2024-11-17 18:56:18.846235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.846325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.846593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.846660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.846901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.846965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.847267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.847355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.847600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.847667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 
00:35:32.545 [2024-11-17 18:56:18.847946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.848036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.848302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.848367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.848631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.848714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.848973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.849061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.849321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.849386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 
00:35:32.545 [2024-11-17 18:56:18.849692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.849762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.850011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.850075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.850339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.850403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.850669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.850765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.851026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.851091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 
00:35:32.545 [2024-11-17 18:56:18.851401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.851467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.851718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.851786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.851993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.852058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.852303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.852369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.852613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.852690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 
00:35:32.545 [2024-11-17 18:56:18.852933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.853000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.853253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.853317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.853619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.853698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.545 [2024-11-17 18:56:18.853990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.545 [2024-11-17 18:56:18.854059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.545 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.854276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.854343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 
00:35:32.546 [2024-11-17 18:56:18.854639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.854720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.855010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.855076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.855299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.855374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.855733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.855802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.856082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.856149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 
00:35:32.546 [2024-11-17 18:56:18.856369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.856436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.856696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.856762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.857023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.857090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.857419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.857487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.857704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.857796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 
00:35:32.546 [2024-11-17 18:56:18.858015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.858081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.858373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.858437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.858635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.858737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.859096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.859196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.859422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.859495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 
00:35:32.546 [2024-11-17 18:56:18.859739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.859811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.860080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.860146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.860396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.860461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.860696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.860764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.860963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.861028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 
00:35:32.546 [2024-11-17 18:56:18.861233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.861299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.861574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.861640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.861887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.861956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.862181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.862247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 00:35:32.546 [2024-11-17 18:56:18.862541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.546 [2024-11-17 18:56:18.862607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.546 qpair failed and we were unable to recover it. 
00:35:32.550 [2024-11-17 18:56:18.893113] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization...
00:35:32.550 [2024-11-17 18:56:18.893204] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:32.550 [2024-11-17 18:56:18.894728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.550 [2024-11-17 18:56:18.894793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.550 qpair failed and we were unable to recover it. 00:35:32.550 [2024-11-17 18:56:18.895047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.550 [2024-11-17 18:56:18.895114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.550 qpair failed and we were unable to recover it. 00:35:32.550 [2024-11-17 18:56:18.895408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.550 [2024-11-17 18:56:18.895474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.550 qpair failed and we were unable to recover it. 00:35:32.550 [2024-11-17 18:56:18.895728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.550 [2024-11-17 18:56:18.895794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.550 qpair failed and we were unable to recover it. 00:35:32.550 [2024-11-17 18:56:18.896092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.550 [2024-11-17 18:56:18.896157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.550 qpair failed and we were unable to recover it. 
00:35:32.550 [2024-11-17 18:56:18.896367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.550 [2024-11-17 18:56:18.896437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.550 qpair failed and we were unable to recover it. 00:35:32.550 [2024-11-17 18:56:18.896722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.550 [2024-11-17 18:56:18.896790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.550 qpair failed and we were unable to recover it. 00:35:32.550 [2024-11-17 18:56:18.897044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.550 [2024-11-17 18:56:18.897109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.550 qpair failed and we were unable to recover it. 00:35:32.550 [2024-11-17 18:56:18.897351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.550 [2024-11-17 18:56:18.897417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.550 qpair failed and we were unable to recover it. 00:35:32.550 [2024-11-17 18:56:18.897707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.550 [2024-11-17 18:56:18.897773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.550 qpair failed and we were unable to recover it. 
00:35:32.550 [2024-11-17 18:56:18.898014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.550 [2024-11-17 18:56:18.898080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.550 qpair failed and we were unable to recover it. 00:35:32.550 [2024-11-17 18:56:18.898285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.550 [2024-11-17 18:56:18.898359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.550 qpair failed and we were unable to recover it. 00:35:32.550 [2024-11-17 18:56:18.898663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.898744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.898991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.899056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.899298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.899365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 
00:35:32.551 [2024-11-17 18:56:18.899610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.899694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.899984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.900049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.900245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.900310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.900516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.900581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.900857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.900925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 
00:35:32.551 [2024-11-17 18:56:18.901214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.901279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.901534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.901599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.901876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.901945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.902249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.902314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.902603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.902668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 
00:35:32.551 [2024-11-17 18:56:18.902969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.903035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.903263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.903329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.903629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.903707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.903980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.904046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.904359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.904426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 
00:35:32.551 [2024-11-17 18:56:18.904702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.904769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.905039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.905106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.905365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.905429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.905697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.905764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.906012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.906081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 
00:35:32.551 [2024-11-17 18:56:18.906384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.906450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.906701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.906768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.907019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.907084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.907385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.907461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 00:35:32.551 [2024-11-17 18:56:18.907758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.551 [2024-11-17 18:56:18.907825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.551 qpair failed and we were unable to recover it. 
00:35:32.552 [2024-11-17 18:56:18.908029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.908094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.908379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.908445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.908740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.908806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.909053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.909117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.909375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.909441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 
00:35:32.552 [2024-11-17 18:56:18.909701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.909767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.910031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.910096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.910394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.910458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.910711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.910778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.911062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.911127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 
00:35:32.552 [2024-11-17 18:56:18.911391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.911456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.911702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.911771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.912028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.912093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.912288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.912353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.912613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.912707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 
00:35:32.552 [2024-11-17 18:56:18.912960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.913025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.913270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.913334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.913617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.913697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.913991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.914056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.914347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.914412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 
00:35:32.552 [2024-11-17 18:56:18.914700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.914767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.915061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.915127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.915377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.915442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.915692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.915760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.916033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.916099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 
00:35:32.552 [2024-11-17 18:56:18.916329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.916395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.916659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.916738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.552 [2024-11-17 18:56:18.916946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.552 [2024-11-17 18:56:18.917013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.552 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.917318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.917383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.917637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.917716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 
00:35:32.553 [2024-11-17 18:56:18.917924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.917988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.918248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.918313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.918560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.918625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.918927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.918992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.919255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.919321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 
00:35:32.553 [2024-11-17 18:56:18.919603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.919669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.919892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.919958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.920200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.920265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.920512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.920587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.920914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.920982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 
00:35:32.553 [2024-11-17 18:56:18.921278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.921344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.921633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.921721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.922021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.922087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.922312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.922377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.922659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.922742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 
00:35:32.553 [2024-11-17 18:56:18.923035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.923102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.923289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.923357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.923615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.923698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.923960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.924029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 00:35:32.553 [2024-11-17 18:56:18.924329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.553 [2024-11-17 18:56:18.924396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.553 qpair failed and we were unable to recover it. 
00:35:32.556 [2024-11-17 18:56:18.943277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.556 [2024-11-17 18:56:18.943304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.556 qpair failed and we were unable to recover it. 00:35:32.556 [2024-11-17 18:56:18.943412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.556 [2024-11-17 18:56:18.943437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.556 qpair failed and we were unable to recover it. 00:35:32.556 [2024-11-17 18:56:18.943521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.556 [2024-11-17 18:56:18.943546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.556 qpair failed and we were unable to recover it. 00:35:32.556 [2024-11-17 18:56:18.943640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.556 [2024-11-17 18:56:18.943688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.556 qpair failed and we were unable to recover it. 00:35:32.556 [2024-11-17 18:56:18.943796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.556 [2024-11-17 18:56:18.943835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.556 qpair failed and we were unable to recover it. 
00:35:32.556 [2024-11-17 18:56:18.943960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.556 [2024-11-17 18:56:18.943988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.556 qpair failed and we were unable to recover it. 00:35:32.556 [2024-11-17 18:56:18.944071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.556 [2024-11-17 18:56:18.944098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.556 qpair failed and we were unable to recover it. 00:35:32.556 [2024-11-17 18:56:18.944212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.556 [2024-11-17 18:56:18.944239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.556 qpair failed and we were unable to recover it. 00:35:32.556 [2024-11-17 18:56:18.944331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.556 [2024-11-17 18:56:18.944358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.556 qpair failed and we were unable to recover it. 00:35:32.556 [2024-11-17 18:56:18.944468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.556 [2024-11-17 18:56:18.944494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.556 qpair failed and we were unable to recover it. 
00:35:32.556 [2024-11-17 18:56:18.944576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.944603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.944704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.944731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.944868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.944894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.944985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.945011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.945091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.945117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 
00:35:32.557 [2024-11-17 18:56:18.945203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.945228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.945320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.945348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.945461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.945486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.945632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.945658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.945780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.945805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 
00:35:32.557 [2024-11-17 18:56:18.945924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.945949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.946067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.946093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.946180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.946207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.946328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.946354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.946470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.946496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 
00:35:32.557 [2024-11-17 18:56:18.946581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.946607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.946701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.946727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.946809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.946835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.946962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.946987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.947063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.947089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 
00:35:32.557 [2024-11-17 18:56:18.947183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.947209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.947286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.947312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.947424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.947450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.947559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.947585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.947700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.947728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 
00:35:32.557 [2024-11-17 18:56:18.947819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.947844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.947936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.947963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.948053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.948079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.948168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.948194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.557 qpair failed and we were unable to recover it. 00:35:32.557 [2024-11-17 18:56:18.948284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.557 [2024-11-17 18:56:18.948309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 
00:35:32.558 [2024-11-17 18:56:18.948422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.948447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.948564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.948590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.948726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.948765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.948898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.948931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.949019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.949046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 
00:35:32.558 [2024-11-17 18:56:18.949130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.949157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.949272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.949298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.949387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.949414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.949552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.949580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.949699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.949728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 
00:35:32.558 [2024-11-17 18:56:18.949820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.949845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.949962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.949988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.950100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.950126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.950235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.950260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.950344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.950371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 
00:35:32.558 [2024-11-17 18:56:18.950481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.950507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.950599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.950625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.950726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.950754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.950847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.950873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.950962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.950990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 
00:35:32.558 [2024-11-17 18:56:18.951070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.951096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.951211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.951239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.951394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.951420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.951557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.951582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.951669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.951701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 
00:35:32.558 [2024-11-17 18:56:18.951793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.951820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.951908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.951934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.952018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.952043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.558 qpair failed and we were unable to recover it. 00:35:32.558 [2024-11-17 18:56:18.952128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.558 [2024-11-17 18:56:18.952154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.952243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.952269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 
00:35:32.559 [2024-11-17 18:56:18.952384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.952414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.952527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.952554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.952663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.952698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.952816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.952842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.952925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.952951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 
00:35:32.559 [2024-11-17 18:56:18.953033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.953059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.953170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.953195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.953316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.953342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.953461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.953488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.953603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.953628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 
00:35:32.559 [2024-11-17 18:56:18.953776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.953802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.953886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.953912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.954003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.954028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.954116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.954141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.954258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.954284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 
00:35:32.559 [2024-11-17 18:56:18.954393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.954420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.954505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.954530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.954620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.954646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.954806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.954845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.954965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.954992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 
00:35:32.559 [2024-11-17 18:56:18.955082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.955108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.955252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.955278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.955398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.955424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.955532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.955558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.955644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.955671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 
00:35:32.559 [2024-11-17 18:56:18.955767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.955793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.955900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.559 [2024-11-17 18:56:18.955926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.559 qpair failed and we were unable to recover it. 00:35:32.559 [2024-11-17 18:56:18.956013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.956038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.956158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.956185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.956300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.956325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 
00:35:32.560 [2024-11-17 18:56:18.956440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.956467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.956577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.956603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.956707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.956746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.956873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.956901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.956993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.957019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 
00:35:32.560 [2024-11-17 18:56:18.957102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.957127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.957217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.957242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.957385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.957411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.957546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.957571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.957657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.957693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 
00:35:32.560 [2024-11-17 18:56:18.957835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.957870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.957990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.958017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.958106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.958133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.958248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.958274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.958364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.958393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 
00:35:32.560 [2024-11-17 18:56:18.958489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.958516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.958598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.958623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.958738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.958764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.958885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.958910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.959015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.959042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 
00:35:32.560 [2024-11-17 18:56:18.959127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.959153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.959267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.959295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.959379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.959405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.959523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.959549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.959636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.959662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 
00:35:32.560 [2024-11-17 18:56:18.959754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.959780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.560 qpair failed and we were unable to recover it. 00:35:32.560 [2024-11-17 18:56:18.959895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.560 [2024-11-17 18:56:18.959921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.960007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.960034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.960144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.960171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.960313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.960339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 
00:35:32.561 [2024-11-17 18:56:18.960415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.960441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.960524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.960550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.960632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.960658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.960757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.960786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.960879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.960905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 
00:35:32.561 [2024-11-17 18:56:18.961020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.961046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.961127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.961153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.961291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.961323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.961416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.961441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.961558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.961585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 
00:35:32.561 [2024-11-17 18:56:18.961725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.961766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.961865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.961893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.962014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.962041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.962193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.962219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.962372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.962398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 
00:35:32.561 [2024-11-17 18:56:18.962486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.962514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.962629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.962656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.962789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.962827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.962922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.962950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.963033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.963059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 
00:35:32.561 [2024-11-17 18:56:18.963171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.963196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.963317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.963344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.561 qpair failed and we were unable to recover it. 00:35:32.561 [2024-11-17 18:56:18.963434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.561 [2024-11-17 18:56:18.963461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.963598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.963624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.963728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.963757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 
00:35:32.562 [2024-11-17 18:56:18.963844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.963871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.963954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.963981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.964089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.964116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.964226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.964252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.964348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.964377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 
00:35:32.562 [2024-11-17 18:56:18.964467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.964494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.964628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.964669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.964773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.964801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.964911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.964937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.965093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.965120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 
00:35:32.562 [2024-11-17 18:56:18.965203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.965231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.965325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.965353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.965473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.965498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.965619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.965646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.965738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.965765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 
00:35:32.562 [2024-11-17 18:56:18.965881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.965907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.966007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.966033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.966139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.966165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.966308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.966332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 00:35:32.562 [2024-11-17 18:56:18.966443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.562 [2024-11-17 18:56:18.966469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.562 qpair failed and we were unable to recover it. 
00:35:32.562 [2024-11-17 18:56:18.966579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.562 [2024-11-17 18:56:18.966605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.562 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.966709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.966748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.966840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.966873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.966999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.967028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.967143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.967171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.967254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.967281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.967375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.967401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.967482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.967508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.967591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.967617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.967715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.967741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.967830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.967856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.967974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.967999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.968076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.968102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.968222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.968251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.968341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.968367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.968484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.968510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.968610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.968637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.968759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.968797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.968892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.968920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.969041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.969068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.969210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.969236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.969319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.969345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.969463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.969491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.969577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.969606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.969756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.969784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.969896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.969923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.970011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.970037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.970150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.970177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.970284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.970318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.970435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.970465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.970570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.970610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.970727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.970755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.970858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.970884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.971004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.971030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.563 [2024-11-17 18:56:18.971111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.563 [2024-11-17 18:56:18.971146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.563 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.971267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.971306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.971428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.971455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.971544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.971570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.971687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.971713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.971792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.971819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.971933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.971958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.972086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.972111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.972228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:32.564 [2024-11-17 18:56:18.972236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.972271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.972377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.972402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.972514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.972539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.972662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.972702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.972797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.972822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.972916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.972942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.973065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.973091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.973182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.973208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.973285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.973311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.973396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.973422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.973558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.973583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.973685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.973714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.973828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.973867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.973964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.973991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.974118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.974144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.974262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.974289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.974407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.974433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.974515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.974542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.974656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.974696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.974837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.974863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.975007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.975033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.975124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.975150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.975241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.975266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.975416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.975444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.564 [2024-11-17 18:56:18.975534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.564 [2024-11-17 18:56:18.975560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.564 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.975689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.975729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.975823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.975853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.975989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.976045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.976148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.976175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.976264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.976292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.976435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.976461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.976572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.976599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.976714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.976743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.976857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.976883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.976997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.977023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.977111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.977137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.977278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.977305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.977397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.977423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.977511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.977537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.977649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.977690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.977779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.977806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.977924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.977952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.978057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.978082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.978194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.978220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.978304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.978329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.978450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.978476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.978585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.978624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.978734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.978763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.978873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.978913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.979016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.979045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.979141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.979172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.979295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.979322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.979513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.979539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.979651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.979697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.979824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.979851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.979937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.565 [2024-11-17 18:56:18.979963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.565 qpair failed and we were unable to recover it.
00:35:32.565 [2024-11-17 18:56:18.980053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.980079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.980165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.980191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.980384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.980411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.980508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.980534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.980640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.980690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.980784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.980812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.980925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.980953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.981077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.981106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.981236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.981262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.981375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.981401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.981489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.981516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.981594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.981626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.981736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.981764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.981846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.981873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.981965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.981997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.982166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.982205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.982306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.566 [2024-11-17 18:56:18.982336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.566 qpair failed and we were unable to recover it.
00:35:32.566 [2024-11-17 18:56:18.982451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.982479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 00:35:32.566 [2024-11-17 18:56:18.982607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.982634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 00:35:32.566 [2024-11-17 18:56:18.982755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.982783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 00:35:32.566 [2024-11-17 18:56:18.982871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.982897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 00:35:32.566 [2024-11-17 18:56:18.983009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.983037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 
00:35:32.566 [2024-11-17 18:56:18.983129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.983156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 00:35:32.566 [2024-11-17 18:56:18.983287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.983314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 00:35:32.566 [2024-11-17 18:56:18.983454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.983481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 00:35:32.566 [2024-11-17 18:56:18.983592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.983627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 00:35:32.566 [2024-11-17 18:56:18.983735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.983762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 
00:35:32.566 [2024-11-17 18:56:18.983853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.983880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 00:35:32.566 [2024-11-17 18:56:18.983973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.983999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 00:35:32.566 [2024-11-17 18:56:18.984086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.984112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 00:35:32.566 [2024-11-17 18:56:18.984232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.984258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 00:35:32.566 [2024-11-17 18:56:18.984341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.984367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 
00:35:32.566 [2024-11-17 18:56:18.984445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.984471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 00:35:32.566 [2024-11-17 18:56:18.984553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.566 [2024-11-17 18:56:18.984579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.566 qpair failed and we were unable to recover it. 00:35:32.566 [2024-11-17 18:56:18.984692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.984719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.984802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.984829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.984922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.984948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 
00:35:32.567 [2024-11-17 18:56:18.985033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.985060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.985179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.985212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.985326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.985353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.985467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.985496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.985591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.985619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 
00:35:32.567 [2024-11-17 18:56:18.985724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.985751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.985830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.985857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.985969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.986000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.986128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.986166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.986293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.986321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 
00:35:32.567 [2024-11-17 18:56:18.986411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.986437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.986550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.986576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.986697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.986724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.986835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.986861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.986948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.986978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 
00:35:32.567 [2024-11-17 18:56:18.987112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.987142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.987226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.987252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.987328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.987354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.987447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.987477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.987604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.987642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 
00:35:32.567 [2024-11-17 18:56:18.987749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.987778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.987896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.987923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.988053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.988079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.988196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.988224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.988309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.988336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 
00:35:32.567 [2024-11-17 18:56:18.988474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.988501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.567 qpair failed and we were unable to recover it. 00:35:32.567 [2024-11-17 18:56:18.988603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.567 [2024-11-17 18:56:18.988631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.988738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.988767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.988861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.988888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.988967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.988993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 
00:35:32.568 [2024-11-17 18:56:18.989110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.989137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.989227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.989255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.989396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.989422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.989510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.989538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.989684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.989711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 
00:35:32.568 [2024-11-17 18:56:18.989803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.989829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.989919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.989946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.990100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.990126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.990270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.990296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.990442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.990467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 
00:35:32.568 [2024-11-17 18:56:18.990587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.990612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.990715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.990748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.990838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.990863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.990952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.990978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.991096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.991122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 
00:35:32.568 [2024-11-17 18:56:18.991229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.991271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.991437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.991465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.991550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.991577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.991668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.991708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.991822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.991849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 
00:35:32.568 [2024-11-17 18:56:18.991957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.991988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.992145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.992171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.992285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.992312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.992400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.992426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.992574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.992600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 
00:35:32.568 [2024-11-17 18:56:18.992704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.992731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.992814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.992840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.992951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.992977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.993093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.993121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.993207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.993234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 
00:35:32.568 [2024-11-17 18:56:18.993326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.568 [2024-11-17 18:56:18.993353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.568 qpair failed and we were unable to recover it. 00:35:32.568 [2024-11-17 18:56:18.993462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.993488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.993602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.993628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.993730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.993761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.993852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.993878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 
00:35:32.569 [2024-11-17 18:56:18.993969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.993994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.994087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.994112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.994223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.994251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.994343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.994373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.994492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.994519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 
00:35:32.569 [2024-11-17 18:56:18.994608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.994633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.994745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.994785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.994883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.994911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.995009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.995036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.995146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.995173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 
00:35:32.569 [2024-11-17 18:56:18.995272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.995301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.995418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.995445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.995562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.995598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.995712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.995739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.995838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.995877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 
00:35:32.569 [2024-11-17 18:56:18.995982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.996009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.996133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.996159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.996281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.996307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.996383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.996410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.996518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.996545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 
00:35:32.569 [2024-11-17 18:56:18.996624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.996650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.996769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.996797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.996890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.996916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.997014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.997041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.997159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.997184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 
00:35:32.569 [2024-11-17 18:56:18.997297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.997325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.997437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.997463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.997572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.997600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.997688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.997716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.997831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.997858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 
00:35:32.569 [2024-11-17 18:56:18.997954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.997990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.998096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.569 [2024-11-17 18:56:18.998124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.569 qpair failed and we were unable to recover it. 00:35:32.569 [2024-11-17 18:56:18.998247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:18.998274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:18.998422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:18.998448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:18.998531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:18.998558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 
00:35:32.570 [2024-11-17 18:56:18.998669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:18.998702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:18.998836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:18.998864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:18.998960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:18.998997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:18.999146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:18.999173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:18.999257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:18.999285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 
00:35:32.570 [2024-11-17 18:56:18.999404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:18.999431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:18.999556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:18.999595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:18.999695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:18.999724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:18.999834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:18.999860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.000000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.000035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 
00:35:32.570 [2024-11-17 18:56:19.000156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.000184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.000302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.000329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.000419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.000446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.000546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.000585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.000701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.000730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 
00:35:32.570 [2024-11-17 18:56:19.000819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.000846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.000925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.000952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.001077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.001104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.001197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.001223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.001316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.001344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 
00:35:32.570 [2024-11-17 18:56:19.001471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.001498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.001591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.001617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.001723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.001751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.001829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.001855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.001947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.001975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 
00:35:32.570 [2024-11-17 18:56:19.002072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.002104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.002194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.002221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.002338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.002364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.002452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.002480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 00:35:32.570 [2024-11-17 18:56:19.002592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.570 [2024-11-17 18:56:19.002619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.570 qpair failed and we were unable to recover it. 
00:35:32.570 [2024-11-17 18:56:19.002734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.002762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.002877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.002904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.002996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.003023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.003096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.003122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.003230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.003258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 
00:35:32.571 [2024-11-17 18:56:19.003366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.003397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.003510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.003536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.003652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.003686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.003776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.003804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.003887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.003914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 
00:35:32.571 [2024-11-17 18:56:19.004064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.004091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.004195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.004221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.004330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.004357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.004469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.004496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.004584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.004623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 
00:35:32.571 [2024-11-17 18:56:19.004740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.004769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.004861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.004889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.004974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.005002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.005198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.005225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.005357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.005385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 
00:35:32.571 [2024-11-17 18:56:19.005498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.005525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.005609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.005635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.005747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.005773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.005863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.005889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.005972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.005999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 
00:35:32.571 [2024-11-17 18:56:19.006109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.006136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.006227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.006255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.006385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.006413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.006519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.006546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 00:35:32.571 [2024-11-17 18:56:19.006627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.571 [2024-11-17 18:56:19.006654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.571 qpair failed and we were unable to recover it. 
00:35:32.571 [2024-11-17 18:56:19.006773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.571 [2024-11-17 18:56:19.006799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.571 qpair failed and we were unable to recover it.
00:35:32.571 [2024-11-17 18:56:19.006884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.571 [2024-11-17 18:56:19.006911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.571 qpair failed and we were unable to recover it.
00:35:32.571 [2024-11-17 18:56:19.006996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.007031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.007120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.007146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.007225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.007251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.007361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.007387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.007478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.007507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.007589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.007615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.007720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.007749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.007839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.007866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.007960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.007987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.008083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.008111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.008224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.008251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.008347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.008374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.008514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.008542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.008651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.008695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.008817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.008844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.008974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.009001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.009123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.009150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.009275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.009301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.009421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.009448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.009548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.009575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.009692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.009719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.009793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.009820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.009947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.009980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.010091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.010119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.010261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.010287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.010381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.010408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.010523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.010551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.010746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.010778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.010896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.010924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.011018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.011046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.011131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.011158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.011279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.572 [2024-11-17 18:56:19.011306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.572 qpair failed and we were unable to recover it.
00:35:32.572 [2024-11-17 18:56:19.011431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.011457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.011574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.011603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.011739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.011765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.011852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.011878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.011977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.012003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.012097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.012122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.012205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.012231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.012350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.012377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.012476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.012502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.012603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.012631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.012749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.012776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.012862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.012888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.012969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.013002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.013120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.013148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.013291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.013318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.013465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.013491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.013605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.013632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.013754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.013780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.013871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.013897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.013986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.014012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.014102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.014128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.014241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.014266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.014359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.014385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.014462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.014487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.014621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.014660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.014775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.014804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.014891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.014917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.015070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.015101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.015218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.015252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.015375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.015415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.015503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.015530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.015614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.015639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.015724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.015751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.015849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.015875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.573 [2024-11-17 18:56:19.015967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.573 [2024-11-17 18:56:19.015992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.573 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.016100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.016129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.016242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.016269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.016355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.016381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.016481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.016508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.016590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.016616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.016712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.016738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.016824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.016850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.016927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.016953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.017062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.017089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.017185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.017214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.017329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.017355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.017444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.017472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.017584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.017609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.017699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.017727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.017861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.017901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.018025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.018053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.018177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.018204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.018294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.018320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.018434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.018461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.018578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.018604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.018718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.018746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.018825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.018851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.018963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.018988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.019070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.019095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.019182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.019208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.019344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.574 [2024-11-17 18:56:19.019370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.574 qpair failed and we were unable to recover it.
00:35:32.574 [2024-11-17 18:56:19.019456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.574 [2024-11-17 18:56:19.019481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.574 qpair failed and we were unable to recover it. 00:35:32.574 [2024-11-17 18:56:19.019562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.574 [2024-11-17 18:56:19.019587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.574 qpair failed and we were unable to recover it. 00:35:32.574 [2024-11-17 18:56:19.019685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.574 [2024-11-17 18:56:19.019713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.574 qpair failed and we were unable to recover it. 00:35:32.574 [2024-11-17 18:56:19.019796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.574 [2024-11-17 18:56:19.019822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.574 qpair failed and we were unable to recover it. 00:35:32.574 [2024-11-17 18:56:19.019931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.574 [2024-11-17 18:56:19.019957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.574 qpair failed and we were unable to recover it. 
00:35:32.574 [2024-11-17 18:56:19.020085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.574 [2024-11-17 18:56:19.020111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.574 qpair failed and we were unable to recover it. 00:35:32.574 [2024-11-17 18:56:19.020272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.574 [2024-11-17 18:56:19.020299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.574 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.020405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.020430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.020518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.020544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.020646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.020698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 
00:35:32.575 [2024-11-17 18:56:19.020798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.020826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.020925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.020952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.021043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.021069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.021159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.021185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.021275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.021309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 
00:35:32.575 [2024-11-17 18:56:19.021438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.021465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.021546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.021572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.021683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.021722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.021812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.021840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.021922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.021948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 
00:35:32.575 [2024-11-17 18:56:19.022031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.022057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.022160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.022189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.022257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:32.575 [2024-11-17 18:56:19.022280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.022292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.575 [2024-11-17 18:56:19.022305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.022307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.575 [2024-11-17 18:56:19.022323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:32.575 [2024-11-17 18:56:19.022334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:32.575 [2024-11-17 18:56:19.022421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.022448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.022580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.022605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.022700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.022727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.022818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.022844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.022931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.022957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 
00:35:32.575 [2024-11-17 18:56:19.023075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.023101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.023217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.023243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.023339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.023364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.023450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.023476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.023591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.023616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 
00:35:32.575 [2024-11-17 18:56:19.023716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.023756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.023850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.023878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.023979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.024034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.023981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:32.575 [2024-11-17 18:56:19.024038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:32.575 [2024-11-17 18:56:19.024201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.024227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 
00:35:32.575 [2024-11-17 18:56:19.024339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.024368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.575 [2024-11-17 18:56:19.024339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.024344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:32.575 [2024-11-17 18:56:19.024454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.024481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.024572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.575 [2024-11-17 18:56:19.024599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.575 qpair failed and we were unable to recover it. 00:35:32.575 [2024-11-17 18:56:19.024709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.024749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.024851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.024879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 
00:35:32.576 [2024-11-17 18:56:19.024991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.025019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.025113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.025140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.025223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.025250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.025359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.025385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.025474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.025502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 
00:35:32.576 [2024-11-17 18:56:19.025591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.025618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.025709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.025743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.025822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.025848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.025961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.025987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.026089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.026122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 
00:35:32.576 [2024-11-17 18:56:19.026217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.026244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.026337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.026364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.026480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.026507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.026592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.026618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.026704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.026731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 
00:35:32.576 [2024-11-17 18:56:19.026820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.026846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.026927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.026953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.027041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.027068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.027163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.027190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.027389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.027417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 
00:35:32.576 [2024-11-17 18:56:19.027508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.027534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.027727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.027755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.027840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.027867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.027998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.028025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.028140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.028168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 
00:35:32.576 [2024-11-17 18:56:19.028251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.028278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.028368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.028395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.028485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.028526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.028641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.028685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.028787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.028825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 
00:35:32.576 [2024-11-17 18:56:19.028954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.028994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.029109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.029136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.576 qpair failed and we were unable to recover it. 00:35:32.576 [2024-11-17 18:56:19.029218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.576 [2024-11-17 18:56:19.029245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.029450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.029477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.029566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.029600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 
00:35:32.577 [2024-11-17 18:56:19.029697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.029724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.029810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.029841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.029922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.029948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.030034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.030065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.030165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.030202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 
00:35:32.577 [2024-11-17 18:56:19.030287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.030315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.030392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.030418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.030510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.030535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.030669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.030714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.030801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.030829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 
00:35:32.577 [2024-11-17 18:56:19.030915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.030942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.031033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.031069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.031185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.031214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.031294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.031325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.031404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.031430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 
00:35:32.577 [2024-11-17 18:56:19.031539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.031567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.031654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.031700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.031783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.031809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.031885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.031911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.031993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.032020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 
00:35:32.577 [2024-11-17 18:56:19.032108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.032144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.032259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.032289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.032383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.032410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.032518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.032545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.032682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.032710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 
00:35:32.577 [2024-11-17 18:56:19.032801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.032829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.032914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.032941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.033037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.033064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.577 [2024-11-17 18:56:19.033144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.577 [2024-11-17 18:56:19.033172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.577 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.033268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.033295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 
00:35:32.578 [2024-11-17 18:56:19.033408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.033435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.033520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.033548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.033638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.033672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.033766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.033793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.033989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.034024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 
00:35:32.578 [2024-11-17 18:56:19.034112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.034139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.034344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.034371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.034458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.034486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.034565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.034592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.034694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.034722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 
00:35:32.578 [2024-11-17 18:56:19.034845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.034874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.034995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.035033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.035138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.035165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.035274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.035300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.035385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.035411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 
00:35:32.578 [2024-11-17 18:56:19.035522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.035550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.035640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.035669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.035762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.035789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.035872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.035900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.035993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.036029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 
00:35:32.578 [2024-11-17 18:56:19.036140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.036167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.036268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.036307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.036412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.036442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.036576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.036603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.036719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.036747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 
00:35:32.578 [2024-11-17 18:56:19.036840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.036868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.036950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.036977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.037098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.037126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.037242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.037268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.037348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.037374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 
00:35:32.578 [2024-11-17 18:56:19.037461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.037497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.037584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.037610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.037736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.037763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.037851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.037878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 00:35:32.578 [2024-11-17 18:56:19.037958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.578 [2024-11-17 18:56:19.037983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.578 qpair failed and we were unable to recover it. 
00:35:32.578 [2024-11-17 18:56:19.038061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.038086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.038171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.038199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.038322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.038359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.038483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.038515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.038604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.038630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 
00:35:32.579 [2024-11-17 18:56:19.038733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.038759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.038840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.038867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.038951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.038977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.039074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.039101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.039188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.039214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 
00:35:32.579 [2024-11-17 18:56:19.039302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.039328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.039414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.039440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.039557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.039583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.039705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.039734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.039826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.039853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 
00:35:32.579 [2024-11-17 18:56:19.039962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.039988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.040085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.040111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.040201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.040227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.040324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.040349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.040448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.040476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 
00:35:32.579 [2024-11-17 18:56:19.040573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.040613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.040747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.040786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.040889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.040918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.041008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.041034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.041134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.041160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 
00:35:32.579 [2024-11-17 18:56:19.041247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.041276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.041385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.041411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.041512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.041553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.041646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.041687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.041783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.041810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 
00:35:32.579 [2024-11-17 18:56:19.041897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.041925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.042014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.042049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.042137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.042163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.042256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.042281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 00:35:32.579 [2024-11-17 18:56:19.042383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.579 [2024-11-17 18:56:19.042410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.579 qpair failed and we were unable to recover it. 
00:35:32.579 [2024-11-17 18:56:19.042518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.579 [2024-11-17 18:56:19.042543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.579 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.042630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.042656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.042749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.042775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.042860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.042885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.042967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.042993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.043074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.043099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.043179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.043204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.043315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.043340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.043421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.043451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.043529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.043554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.043641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.043683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.043768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.043793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.043870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.043896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.043993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.044019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.044095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.044120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.044238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.044264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.044373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.044399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.044478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.044503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.044587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.044627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.044755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.044783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.044870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.044896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.044989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.045019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.045142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.045169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.045255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.045282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.045367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.045399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.045530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.045556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.045641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.045684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.045774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.045800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.045881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.045907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.046004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.046030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.046123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.046150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.046236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.046262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.046371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.046397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.046476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.046503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.046585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.046611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.046708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.046740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.046836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.046865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.046965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.047015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.580 [2024-11-17 18:56:19.047120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.580 [2024-11-17 18:56:19.047156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.580 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.047239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.047266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.047353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.047379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.047490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.047517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.047606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.047633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.047721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.047749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.047850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.047889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.048015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.048042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.048133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.048159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.048267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.048292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.048409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.048435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.048524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.048551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.048634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.048661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.048759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.048785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.048875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.048901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.049017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.049042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.049128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.049165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.049266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.049306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.049398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.049427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.049551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.049578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.049709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.049737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.049832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.049859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.049941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.049967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.050053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.050081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.050182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.050208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.050294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.050327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.050421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.050446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.050528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.050555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.050629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.050655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.050763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.050790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.050890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.050929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.051027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.051058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.051179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.051206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.051297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.051323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.051405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.051434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.051635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.051663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.051751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.051776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.051867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.051893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.581 qpair failed and we were unable to recover it.
00:35:32.581 [2024-11-17 18:56:19.051999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.581 [2024-11-17 18:56:19.052032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.052114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.052140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.052231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.052257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.052356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.052381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.052474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.052500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.052575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.052600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.052697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.052725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.052799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.052824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.052900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.052926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.053013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.053047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.053161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.053187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.053268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.053293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.053405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.053455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.053563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.053593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.053685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.053713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.053823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.053849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.053961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.053996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.054191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.054218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.054337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.054365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.054453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.054482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.054571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.054598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.054711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.054740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.054824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.054849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.054928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.054954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.055070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.055097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.055187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.055213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.055304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.055334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.055419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.055444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.055521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.055546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.055636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.055662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.055768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.055794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.582 [2024-11-17 18:56:19.055906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.582 [2024-11-17 18:56:19.055932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.582 qpair failed and we were unable to recover it.
00:35:32.583 [2024-11-17 18:56:19.056019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.583 [2024-11-17 18:56:19.056045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.583 qpair failed and we were unable to recover it.
00:35:32.583 [2024-11-17 18:56:19.056123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.583 [2024-11-17 18:56:19.056152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.583 qpair failed and we were unable to recover it.
00:35:32.583 [2024-11-17 18:56:19.056348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.583 [2024-11-17 18:56:19.056376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.583 qpair failed and we were unable to recover it.
00:35:32.583 [2024-11-17 18:56:19.056452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.583 [2024-11-17 18:56:19.056479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.583 qpair failed and we were unable to recover it.
00:35:32.583 [2024-11-17 18:56:19.056598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.583 [2024-11-17 18:56:19.056624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.583 qpair failed and we were unable to recover it.
00:35:32.583 [2024-11-17 18:56:19.056718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.583 [2024-11-17 18:56:19.056745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.583 qpair failed and we were unable to recover it.
00:35:32.583 [2024-11-17 18:56:19.056830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.583 [2024-11-17 18:56:19.056856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.583 qpair failed and we were unable to recover it.
00:35:32.583 [2024-11-17 18:56:19.056941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.056976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.057120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.057148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.057261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.057287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.057378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.057405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.057492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.057519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 
00:35:32.583 [2024-11-17 18:56:19.057632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.057659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.057785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.057812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.057899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.057925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.058023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.058050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.058165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.058192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 
00:35:32.583 [2024-11-17 18:56:19.058301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.058328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.058416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.058444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.058532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.058559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.058681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.058720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.058840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.058868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 
00:35:32.583 [2024-11-17 18:56:19.058959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.058986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.059067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.059093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.059190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.059219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.059304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.059334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.059432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.059458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 
00:35:32.583 [2024-11-17 18:56:19.059633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.059692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.059899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.059928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.060020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.060045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.060148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.060174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.060309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.060335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 
00:35:32.583 [2024-11-17 18:56:19.060420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.060447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.060541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.060567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.060648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.060699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.060814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.060839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.060917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.060943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 
00:35:32.583 [2024-11-17 18:56:19.061066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.061092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.061173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.061199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.061290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.061315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.583 [2024-11-17 18:56:19.061427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.583 [2024-11-17 18:56:19.061453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.583 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.061557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.061598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 
00:35:32.584 [2024-11-17 18:56:19.061706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.061736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.061832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.061860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.061937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.061964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.062065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.062092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.062182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.062210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 
00:35:32.584 [2024-11-17 18:56:19.062291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.062318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.062401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.062428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.062550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.062590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.062704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.062732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.062856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.062886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 
00:35:32.584 [2024-11-17 18:56:19.062983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.063011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.063157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.063184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.063275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.063301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.063387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.063414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.063509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.063539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 
00:35:32.584 [2024-11-17 18:56:19.063636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.063670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.063767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.063792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.063874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.063900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.063991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.064017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.064112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.064141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 
00:35:32.584 [2024-11-17 18:56:19.064244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.064272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.064365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.064403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.064525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.064551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.064630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.064664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.064761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.064789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 
00:35:32.584 [2024-11-17 18:56:19.064907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.064933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.065032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.065059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.065156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.065190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.065274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.065301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.065383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.065408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 
00:35:32.584 [2024-11-17 18:56:19.065496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.065522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.065607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.065632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.065738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.065789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.065897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.065936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.066034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.066062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 
00:35:32.584 [2024-11-17 18:56:19.066188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.066214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.066302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.066329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.066413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.066439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.066528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.066555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.584 qpair failed and we were unable to recover it. 00:35:32.584 [2024-11-17 18:56:19.066629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.584 [2024-11-17 18:56:19.066655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 
00:35:32.585 [2024-11-17 18:56:19.066753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.066780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.066862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.066887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.066971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.066997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.067114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.067140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.067236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.067262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 
00:35:32.585 [2024-11-17 18:56:19.067342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.067367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.067459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.067486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.067573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.067598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.067707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.067732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.067822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.067847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 
00:35:32.585 [2024-11-17 18:56:19.067928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.067953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.068073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.068098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.068177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.068202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.068288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.068315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.068435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.068460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 
00:35:32.585 [2024-11-17 18:56:19.068558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.068584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.068660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.068698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.068787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.068814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.068902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.068928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.069027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.069055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 
00:35:32.585 [2024-11-17 18:56:19.069162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.069188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.069277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.069302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.069396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.069422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.069537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.069563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.069650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.069702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 
00:35:32.585 [2024-11-17 18:56:19.069794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.069821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.069904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.069929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.070020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.070045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.070144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.070169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.070262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.070287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 
00:35:32.585 [2024-11-17 18:56:19.070370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.070396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.070535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.070576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.070669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.070711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.070798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.070824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.070920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.070947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 
00:35:32.585 [2024-11-17 18:56:19.071061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.071088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.071199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.071225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.071349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.071375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.585 [2024-11-17 18:56:19.071480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.585 [2024-11-17 18:56:19.071520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.585 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.071626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.071671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 
00:35:32.586 [2024-11-17 18:56:19.071776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.071804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.071952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.071984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.072069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.072096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.072181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.072208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.072293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.072318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 
00:35:32.586 [2024-11-17 18:56:19.072395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.072421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.072516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.072541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.072628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.072653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.072756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.072782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.072867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.072893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 
00:35:32.586 [2024-11-17 18:56:19.073006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.073032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.073113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.073140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.073219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.073245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.073335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.073364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.073462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.073502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 
00:35:32.586 [2024-11-17 18:56:19.073597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.073625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.073757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.073785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.073875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.073902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.074002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.074028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.074131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.074159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 
00:35:32.586 [2024-11-17 18:56:19.074252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.074279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.074358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.074384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.074465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.074491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.074580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.074606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.074692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.074719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 
00:35:32.586 [2024-11-17 18:56:19.074800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.074827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.074907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.074933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.075018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.075043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.075129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.075155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.075230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.075256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 
00:35:32.586 [2024-11-17 18:56:19.075342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.075368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.075479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.586 [2024-11-17 18:56:19.075505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.586 qpair failed and we were unable to recover it. 00:35:32.586 [2024-11-17 18:56:19.075585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.075619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.075725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.075752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.075836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.075862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 
00:35:32.587 [2024-11-17 18:56:19.075948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.075974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.076061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.076086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.076170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.076199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.076283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.076310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.076398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.076426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 
00:35:32.587 [2024-11-17 18:56:19.076535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.076562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.076648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.076701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.076805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.076844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.076940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.076974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.077064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.077091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 
00:35:32.587 [2024-11-17 18:56:19.077207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.077233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.077325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.077351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.077440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.077468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.077549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.077576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.077680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.077709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 
00:35:32.587 [2024-11-17 18:56:19.077797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.077824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.077915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.077942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.078061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.078087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.078176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.078204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.078280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.078306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 
00:35:32.587 [2024-11-17 18:56:19.078392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.078418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.078531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.078557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.078641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.078671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.078769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.078796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.078914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.078946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 
00:35:32.587 [2024-11-17 18:56:19.079043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.079071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.079163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.079202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.079288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.079315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.079400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.079427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.079545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.079571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 
00:35:32.587 [2024-11-17 18:56:19.079652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.079696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.079774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.079800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.079876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.079902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.080018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.587 [2024-11-17 18:56:19.080044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.587 qpair failed and we were unable to recover it. 00:35:32.587 [2024-11-17 18:56:19.080130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.080157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 
00:35:32.588 [2024-11-17 18:56:19.080266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.080293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.080382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.080409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.080493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.080520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.080605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.080633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.080737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.080766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 
00:35:32.588 [2024-11-17 18:56:19.080851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.080877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.081034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.081060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.081149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.081174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.081261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.081288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.081383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.081411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 
00:35:32.588 [2024-11-17 18:56:19.081496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.081524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.081617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.081644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.081752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.081780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.081870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.081895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.082022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.082061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 
00:35:32.588 [2024-11-17 18:56:19.082157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.082186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.082262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.082289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.082378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.082406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.082495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.082521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.082638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.082679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 
00:35:32.588 [2024-11-17 18:56:19.082764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.082790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.082875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.082901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.083002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.083027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.083139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.083167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.083249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.083278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 
00:35:32.588 [2024-11-17 18:56:19.083362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.083388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.083479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.083505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.083618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.083645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.083750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.083778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.083869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.083896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 
00:35:32.588 [2024-11-17 18:56:19.084020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.084046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.084163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.084189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.084262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.084288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.084371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.084399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.084516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.084545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 
00:35:32.588 [2024-11-17 18:56:19.084627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.084654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.084748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.084776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.588 qpair failed and we were unable to recover it. 00:35:32.588 [2024-11-17 18:56:19.084862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.588 [2024-11-17 18:56:19.084888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.085003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.085029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.085148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.085175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 
00:35:32.589 [2024-11-17 18:56:19.085268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.085295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.085423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.085455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.085543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.085570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.085658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.085706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.085786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.085813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 
00:35:32.589 [2024-11-17 18:56:19.085902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.085928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.086019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.086052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.086138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.086165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.086251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.086279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.086368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.086395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 
00:35:32.589 [2024-11-17 18:56:19.086477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.086504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.086584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.086610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.086711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.086738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.086824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.086850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.086930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.086957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 
00:35:32.589 [2024-11-17 18:56:19.087042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.087069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.087160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.087186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.087283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.087312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.087440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.087481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.087600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.087628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 
00:35:32.589 [2024-11-17 18:56:19.087748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.087777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.087856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.087883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.087965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.088002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.088088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.088115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.088252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.088280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 
00:35:32.589 [2024-11-17 18:56:19.088373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.088402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.088487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.088514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.088588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.088613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.088715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.088743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.088823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.088848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 
00:35:32.589 [2024-11-17 18:56:19.088948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.088974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.089062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.589 [2024-11-17 18:56:19.089088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.589 qpair failed and we were unable to recover it. 00:35:32.589 [2024-11-17 18:56:19.089194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.590 [2024-11-17 18:56:19.089220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.590 qpair failed and we were unable to recover it. 00:35:32.590 [2024-11-17 18:56:19.089309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.590 [2024-11-17 18:56:19.089338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.590 qpair failed and we were unable to recover it. 00:35:32.590 [2024-11-17 18:56:19.089438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.590 [2024-11-17 18:56:19.089477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.590 qpair failed and we were unable to recover it. 
00:35:32.590 [2024-11-17 18:56:19.089594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.590 [2024-11-17 18:56:19.089621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.590 qpair failed and we were unable to recover it. 00:35:32.590 [2024-11-17 18:56:19.089729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.590 [2024-11-17 18:56:19.089757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.590 qpair failed and we were unable to recover it. 00:35:32.590 [2024-11-17 18:56:19.089848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.590 [2024-11-17 18:56:19.089875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.590 qpair failed and we were unable to recover it. 00:35:32.590 [2024-11-17 18:56:19.090018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.590 [2024-11-17 18:56:19.090045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.590 qpair failed and we were unable to recover it. 00:35:32.590 [2024-11-17 18:56:19.090135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.590 [2024-11-17 18:56:19.090160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.590 qpair failed and we were unable to recover it. 
00:35:32.855 [2024-11-17 18:56:19.090259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.090286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 00:35:32.855 [2024-11-17 18:56:19.090406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.090434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 00:35:32.855 [2024-11-17 18:56:19.090535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.090564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 00:35:32.855 [2024-11-17 18:56:19.090644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.090692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 00:35:32.855 [2024-11-17 18:56:19.090786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.090812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 
00:35:32.855 [2024-11-17 18:56:19.090915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.090941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 00:35:32.855 [2024-11-17 18:56:19.091051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.091076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 00:35:32.855 [2024-11-17 18:56:19.091194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.091220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 00:35:32.855 [2024-11-17 18:56:19.091311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.091338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 00:35:32.855 [2024-11-17 18:56:19.091417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.091444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 
00:35:32.855 [2024-11-17 18:56:19.091540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.091567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 00:35:32.855 [2024-11-17 18:56:19.091700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.091730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 00:35:32.855 [2024-11-17 18:56:19.091819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.091845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 00:35:32.855 [2024-11-17 18:56:19.091937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.091963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 00:35:32.855 [2024-11-17 18:56:19.092093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.092120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 
00:35:32.855 [2024-11-17 18:56:19.092206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.092232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 00:35:32.855 [2024-11-17 18:56:19.092315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.855 [2024-11-17 18:56:19.092342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.855 qpair failed and we were unable to recover it. 00:35:32.855 [2024-11-17 18:56:19.092433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.092460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.092551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.092580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.092682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.092710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 
00:35:32.856 [2024-11-17 18:56:19.092797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.092824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.092913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.092939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.093031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.093058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.093145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.093170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.093256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.093284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 
00:35:32.856 [2024-11-17 18:56:19.093378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.093404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.093490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.093516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.093600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.093625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.093724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.093751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.093848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.093875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 
00:35:32.856 [2024-11-17 18:56:19.093964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.093992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.094104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.094131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.094211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.094237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.094335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.094362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.094445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.094471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 
00:35:32.856 [2024-11-17 18:56:19.094565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.094605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.094726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.094755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.094842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.094870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.094952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.094979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.095080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.095108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 
00:35:32.856 [2024-11-17 18:56:19.095200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.095227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.095310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.095337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.095418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.095445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.095539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.095571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.095681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.095710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 
00:35:32.856 [2024-11-17 18:56:19.095800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.095828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.095908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.095934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.096024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.096051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.096150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.096178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.096259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.096286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 
00:35:32.856 [2024-11-17 18:56:19.096395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.096422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.096509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.096535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.096612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.096638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.096735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.096763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.096857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.096884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 
00:35:32.856 [2024-11-17 18:56:19.096987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.097014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.097094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.097121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.097219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.097246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.097344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.097372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.097460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.097486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 
00:35:32.856 [2024-11-17 18:56:19.097574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.097602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.097744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.097785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.097874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.097903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.098027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.098054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.098137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.098163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 
00:35:32.856 [2024-11-17 18:56:19.098249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.098277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.098362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.098388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.098484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.098509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.098602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.098628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 00:35:32.856 [2024-11-17 18:56:19.098733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.856 [2024-11-17 18:56:19.098760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.856 qpair failed and we were unable to recover it. 
00:35:32.857 [2024-11-17 18:56:19.098842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.098871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.098957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.098984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.099073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.099109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.099202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.099231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.099324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.099352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 
00:35:32.857 [2024-11-17 18:56:19.099449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.099476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.099560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.099585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.099696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.099724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.099816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.099841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.099928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.099954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 
00:35:32.857 [2024-11-17 18:56:19.100054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.100081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.100200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.100227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.100307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.100337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.100455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.100482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.100585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.100613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 
00:35:32.857 [2024-11-17 18:56:19.100708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.100736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.100830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.100857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.100945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.100977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.101094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.101121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.101202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.101228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 
00:35:32.857 [2024-11-17 18:56:19.101321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.101348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.101457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.101483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.101583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.101611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.101711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.101739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 00:35:32.857 [2024-11-17 18:56:19.101834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.857 [2024-11-17 18:56:19.101860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.857 qpair failed and we were unable to recover it. 
00:35:32.857 [2024-11-17 18:56:19.101945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.101972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.102093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.102122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.102209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.102236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.102322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.102349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.102446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.102475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.102564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.102591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.102665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.102698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.102789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.102816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.102898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.102925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.103018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.103044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.103118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.103145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.103219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.103245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.103327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.103355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.103443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.103473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.103580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.103621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.103730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.103765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.103851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.103878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.103981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.104008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.104090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.104117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.104206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.104233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.104314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.104342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.104421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.104447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.104572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.104600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.104699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.104726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.104808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.104834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.104922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.104949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.105045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.105071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.105152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.105178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.857 qpair failed and we were unable to recover it.
00:35:32.857 [2024-11-17 18:56:19.105261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.857 [2024-11-17 18:56:19.105289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.105398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.105427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.105515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.105541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.105624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.105651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.105756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.105784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.105877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.105903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.105994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.106019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.106114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.106141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.106219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.106254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.106338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.106363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.106476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.106502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.106585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.106613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.106718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.106748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.106832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.106858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.106952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.106981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.107109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.107134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.107220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.107246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.107325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.107351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.107435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.107461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.107546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.107572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.107658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.107690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.107774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.107799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.107884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.107909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.108034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.108061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.108142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.108167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.108253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.108277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.108358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.108384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.108468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.108499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.108623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.108664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.108770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.108798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.108885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.108911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.109003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.109028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.109114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.109141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.109231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.109257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.109341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.109367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.109449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.109475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.109571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.109609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.109807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.109837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.109928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.109956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.110043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.110069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.110159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.110186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.110279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.110306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.110391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.110416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.110498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.110524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.110605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.110630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.110724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.110750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.110849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.110876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.110965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.110991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.111067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.111093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.111175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.111200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.111291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.111317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.111420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.111460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.111570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.111601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.111728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.111765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.111869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.111902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.112002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.112030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.112110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.112137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.112226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.112254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.112340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.112366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.112449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.112478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.112564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.112590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.112686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.112715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.112797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.112822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.858 [2024-11-17 18:56:19.112907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.858 [2024-11-17 18:56:19.112933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.858 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.113047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.113072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.113153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.113183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.113277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.113304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.113380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.113407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.113492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.113518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.113599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.113625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.113724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.113751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.113837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.113862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.113957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.113984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.114074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.114101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.114184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.114210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.114291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.114317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.114404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.114430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.114516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.114541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.114624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.114650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.114757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.114783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.114871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.114896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.114988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.115013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.115093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.115121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.115246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.115273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.115353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.115380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.115492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.115518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.115600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.859 [2024-11-17 18:56:19.115626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.859 qpair failed and we were unable to recover it.
00:35:32.859 [2024-11-17 18:56:19.115711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.115738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.115827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.115853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.115926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.115952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.116043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.116069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.116160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.116189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 
00:35:32.859 [2024-11-17 18:56:19.116272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.116298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.116385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.116411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.116485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.116511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.116603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.116630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.116729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.116756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 
00:35:32.859 [2024-11-17 18:56:19.116837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.116863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.116951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.116977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.117071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.117098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.117185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.117214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.117311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.117344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 
00:35:32.859 [2024-11-17 18:56:19.117438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.117465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.117544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.117570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.117654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.117686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.117791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.117818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.117898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.117924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 
00:35:32.859 [2024-11-17 18:56:19.118015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.118040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.118128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.118154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.118244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.118270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.118378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.118405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.118485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.118520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 
00:35:32.859 [2024-11-17 18:56:19.118602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.118627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.118711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.118746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.118835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.118862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.118950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.118975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.119077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.119104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 
00:35:32.859 [2024-11-17 18:56:19.119194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.119223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.119428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.119467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.119568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.119596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.119694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.119727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.119819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.119852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 
00:35:32.859 [2024-11-17 18:56:19.120046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.120072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.120156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.120184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.120273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.120302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.120380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.120407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.120505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.120544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 
00:35:32.859 [2024-11-17 18:56:19.120637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.859 [2024-11-17 18:56:19.120664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.859 qpair failed and we were unable to recover it. 00:35:32.859 [2024-11-17 18:56:19.120774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.120801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.120886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.120912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.121004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.121030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.121124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.121150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 
00:35:32.860 [2024-11-17 18:56:19.121238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.121266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.121375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.121402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.121495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.121521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.121606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.121633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.121727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.121754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 
00:35:32.860 [2024-11-17 18:56:19.121947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.121973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.122052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.122078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.122163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.122189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.122274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.122301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.122394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.122423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 
00:35:32.860 [2024-11-17 18:56:19.122515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.122541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.122622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.122648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.122752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.122778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.122878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.122904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.123024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.123050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 
00:35:32.860 [2024-11-17 18:56:19.123131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.123157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.123240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.123267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.123349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.123374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.123471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.123497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.123575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.123601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 
00:35:32.860 [2024-11-17 18:56:19.123697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.123723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.123815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.123841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.123923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.123959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.124043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.124069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.124153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.124180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 
00:35:32.860 [2024-11-17 18:56:19.124256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.124294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.124370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.124396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.124483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.124508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.124590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.124619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 00:35:32.860 [2024-11-17 18:56:19.124711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.860 [2024-11-17 18:56:19.124744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.860 qpair failed and we were unable to recover it. 
00:35:32.860 [2024-11-17 18:56:19.124824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.124851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.124939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.124967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.125053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.125078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.125166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.125193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.125269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.125294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.125385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.125410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.125504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.125531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.125622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.125649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.125751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.125784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.125868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.125895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.125981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.126007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.126093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.126120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.126195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.126221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.126318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.126343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.126434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.126461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.126546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.126572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.126658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.126690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.126782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.126808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.126894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.126921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.127016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.127055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.127154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.860 [2024-11-17 18:56:19.127182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.860 qpair failed and we were unable to recover it.
00:35:32.860 [2024-11-17 18:56:19.127377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.127404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.127491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.127519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.127603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.127629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.127726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.127753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.127845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.127872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.127964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.127996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.128085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.128111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.128190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.128216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.128298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.128323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.128412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.128437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.128527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.128552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.128639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.128665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.128754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.128780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.128856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.128882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.128968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.128995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.129078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.129104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.129194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.129222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.129316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.129343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.129419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.129446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.129529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.129559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.129645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.129671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.129760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.129786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.129868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.129893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.129977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.130006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.130100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.130127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.130210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.130237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.130324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.130350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.130428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.130455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.130533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.130558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.130645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.130670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.130772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.130798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.130884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.130911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.130999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.131028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.131120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.131147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.131229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.131255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.131344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.131370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.131454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.131481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.131562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.131588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.131686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.131713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.131799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.131825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.131908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.131933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.132012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.132039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.132128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.132157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.132247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.132274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.132363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.132389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.132471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.132497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.132590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.132619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.132703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.132731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.132814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.132841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.132926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.132953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.133036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.133061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.861 [2024-11-17 18:56:19.133146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.861 [2024-11-17 18:56:19.133171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.861 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.133249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.133274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.133362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.133389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.133480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.133506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.133578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.133604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.133688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.133715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.133802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.133827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.133910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.133936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.134025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.134052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.134138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.134164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.134256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.134282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.134377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.134404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.134500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.134525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.134609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.134636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.134739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.134766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.134844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.134869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.134958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.134991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.135090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.135117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.135213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.135240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.135332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.135358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.135459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.135487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.135572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.135602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.135708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.135735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.135826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.135851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.135934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.135959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.136061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.136087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.136178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.136202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.136286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.136312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.136400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.136426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.136521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.136547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.136642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.136668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.136761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.136787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.136881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.136907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.136990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.137016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.137102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.137142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.137222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.137247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.137330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.137356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.137440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.137468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.137835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.137868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.137984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.138011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.138094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.138121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.138204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.138231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.138315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.862 [2024-11-17 18:56:19.138341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.862 qpair failed and we were unable to recover it.
00:35:32.862 [2024-11-17 18:56:19.138435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.862 [2024-11-17 18:56:19.138464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.862 qpair failed and we were unable to recover it. 00:35:32.862 [2024-11-17 18:56:19.138546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.862 [2024-11-17 18:56:19.138572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.862 qpair failed and we were unable to recover it. 00:35:32.862 [2024-11-17 18:56:19.138658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.862 [2024-11-17 18:56:19.138693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.862 qpair failed and we were unable to recover it. 00:35:32.862 [2024-11-17 18:56:19.138792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.862 [2024-11-17 18:56:19.138820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.862 qpair failed and we were unable to recover it. 00:35:32.862 [2024-11-17 18:56:19.138902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.862 [2024-11-17 18:56:19.138929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.862 qpair failed and we were unable to recover it. 
00:35:32.862 [2024-11-17 18:56:19.139008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.862 [2024-11-17 18:56:19.139039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.862 qpair failed and we were unable to recover it. 00:35:32.862 [2024-11-17 18:56:19.139125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.862 [2024-11-17 18:56:19.139152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.862 qpair failed and we were unable to recover it. 00:35:32.862 [2024-11-17 18:56:19.139236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.862 [2024-11-17 18:56:19.139262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.862 qpair failed and we were unable to recover it. 00:35:32.862 [2024-11-17 18:56:19.139349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.862 [2024-11-17 18:56:19.139375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.862 qpair failed and we were unable to recover it. 00:35:32.862 [2024-11-17 18:56:19.139450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.862 [2024-11-17 18:56:19.139476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.862 qpair failed and we were unable to recover it. 
00:35:32.862 [2024-11-17 18:56:19.139576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.862 [2024-11-17 18:56:19.139616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.862 qpair failed and we were unable to recover it. 00:35:32.862 [2024-11-17 18:56:19.139706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.862 [2024-11-17 18:56:19.139735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.862 qpair failed and we were unable to recover it. 00:35:32.862 [2024-11-17 18:56:19.139820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.862 [2024-11-17 18:56:19.139847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.862 qpair failed and we were unable to recover it. 00:35:32.862 [2024-11-17 18:56:19.139929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.862 [2024-11-17 18:56:19.139955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.862 qpair failed and we were unable to recover it. 00:35:32.862 [2024-11-17 18:56:19.140075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.140102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 
00:35:32.863 [2024-11-17 18:56:19.140179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.140205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.140284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.140310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.140387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.140414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.140501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.140528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.140628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.140654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 
00:35:32.863 [2024-11-17 18:56:19.140762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.140801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.140899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.140927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.141046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.141073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.141150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.141176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.141260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.141287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 
00:35:32.863 [2024-11-17 18:56:19.141367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.141394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.141482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.141510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.141596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.141622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.141711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.141739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.141826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.141851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 
00:35:32.863 [2024-11-17 18:56:19.141939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.141965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.142039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.142066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.142155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.142183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.142401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.142429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.142525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.142554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 
00:35:32.863 [2024-11-17 18:56:19.142632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.142659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.142763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.142789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.142877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.142903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.142986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.143012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.143098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.143125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 
00:35:32.863 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:32.863 [2024-11-17 18:56:19.143213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.143240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:32.863 [2024-11-17 18:56:19.143321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.143348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.143426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:32.863 [2024-11-17 18:56:19.143453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:32.863 [2024-11-17 18:56:19.143550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.143580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 
00:35:32.863 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.863 [2024-11-17 18:56:19.143678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.143707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.143794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.143820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.143906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.143933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.144021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.144049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.144166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.144193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 
00:35:32.863 [2024-11-17 18:56:19.144286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.144312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.144390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.144418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.144499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.144526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.144611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.144637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.144724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.144751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 
00:35:32.863 [2024-11-17 18:56:19.144833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.144859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.144940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.144966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.145052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.145078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.145174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.145202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.145289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.145316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 
00:35:32.863 [2024-11-17 18:56:19.145396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.145422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.145497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.145524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.145603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.145629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.145723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.145750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.145831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.145858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 
00:35:32.863 [2024-11-17 18:56:19.145942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.863 [2024-11-17 18:56:19.145969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.863 qpair failed and we were unable to recover it. 00:35:32.863 [2024-11-17 18:56:19.146046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.146074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.146159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.146186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.146278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.146305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.146391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.146419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 
00:35:32.864 [2024-11-17 18:56:19.146497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.146524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.146615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.146644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.146747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.146774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.146889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.146916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.146989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.147015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 
00:35:32.864 [2024-11-17 18:56:19.147098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.147126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.147213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.147239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.147320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.147346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.147427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.147454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.147538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.147564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 
00:35:32.864 [2024-11-17 18:56:19.147642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.147668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.147768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.147794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.147873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.147903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.148020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.148047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.148133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.148159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 
00:35:32.864 [2024-11-17 18:56:19.148305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.148332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.148421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.148447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.148528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.148555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.148643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.148669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.148770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.148796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 
00:35:32.864 [2024-11-17 18:56:19.148872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.148898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.148974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.149000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.149094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.149122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.149220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.149245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.149338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.149365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 
00:35:32.864 [2024-11-17 18:56:19.149455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.149481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.149566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.149592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.149710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.149750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.149835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.149866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.149946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.149975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 
00:35:32.864 [2024-11-17 18:56:19.150078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.150104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.150198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.150225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.150310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.150336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.150422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.150448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.150529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.150557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 
00:35:32.864 [2024-11-17 18:56:19.150643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.150669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.150761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.150788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.150872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.150898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.150974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.151001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.151084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.151112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 
00:35:32.864 [2024-11-17 18:56:19.151201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.151228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.151326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.151365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.151467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.151496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.151578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.864 [2024-11-17 18:56:19.151607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.864 qpair failed and we were unable to recover it. 00:35:32.864 [2024-11-17 18:56:19.151694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.151721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 
00:35:32.865 [2024-11-17 18:56:19.151799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.151826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.151906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.151933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.152022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.152049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.152136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.152163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.152250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.152277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 
00:35:32.865 [2024-11-17 18:56:19.152363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.152389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.152474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.152501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.152597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.152631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.152734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.152763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.152844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.152871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 
00:35:32.865 [2024-11-17 18:56:19.152951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.152984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.153068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.153094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.153178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.153203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.153283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.153309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.153395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.153420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 
00:35:32.865 [2024-11-17 18:56:19.153539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.153564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.153669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.153702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.153788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.153815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.153897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.153922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.154012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.154037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 
00:35:32.865 [2024-11-17 18:56:19.154127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.154153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.154240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.154281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.154367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.154393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.154483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.154508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.154605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.154631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 
00:35:32.865 [2024-11-17 18:56:19.154726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.154769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.154853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.154879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.154965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.154991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.155097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.155124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.155206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.155233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 
00:35:32.865 [2024-11-17 18:56:19.155316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.155343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.155429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.155456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.155550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.155575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.155688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.155722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.155795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.155821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 
00:35:32.865 [2024-11-17 18:56:19.155904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.155929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.156051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.156076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.156164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.156189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.156274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.156314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.156394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.156420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 
00:35:32.865 [2024-11-17 18:56:19.156496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.156523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.156607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.156634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.156751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.156779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.156856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.156882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.156963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.156998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 
00:35:32.865 [2024-11-17 18:56:19.157106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.157132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.157216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.157241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.157332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.157358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.157439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.157465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.157559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.157586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 
00:35:32.865 [2024-11-17 18:56:19.157668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.157713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.157793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.157820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.157898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.157924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.158022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.158059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.158143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.158168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 
00:35:32.865 [2024-11-17 18:56:19.158250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.158276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.158370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.158397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.158483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.158508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.158591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.158616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.158699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.158733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 
00:35:32.865 [2024-11-17 18:56:19.158845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.865 [2024-11-17 18:56:19.158872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.865 qpair failed and we were unable to recover it. 00:35:32.865 [2024-11-17 18:56:19.158955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.866 [2024-11-17 18:56:19.158981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.866 qpair failed and we were unable to recover it. 00:35:32.866 [2024-11-17 18:56:19.159054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.866 [2024-11-17 18:56:19.159080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.866 qpair failed and we were unable to recover it. 00:35:32.866 [2024-11-17 18:56:19.159166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.866 [2024-11-17 18:56:19.159199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.866 qpair failed and we were unable to recover it. 00:35:32.866 [2024-11-17 18:56:19.159286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.866 [2024-11-17 18:56:19.159311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.866 qpair failed and we were unable to recover it. 
00:35:32.866 [2024-11-17 18:56:19.159393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.159437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.159522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.159548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.159641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.159667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.159755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.159781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.159866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.159892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.159991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.160017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.160107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.160133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.160211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.160237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.160328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.160353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.160480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.160507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.160592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.160617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.160731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.160769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.160871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.160908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.161015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.161043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.161157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.161183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.161275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.161304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.161423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.161451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.161872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.161903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.162003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.162030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.162118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.162145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.162230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.162256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.162345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.162373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.162468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.162494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.162587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.162614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.162692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.162722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.162802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.162835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.162928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.162955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.163040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.163065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.163147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.163174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.163256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.163283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.163363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.163389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.163466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.163491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.163573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.163598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.163688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.163716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.163808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.163835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.163932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.163970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.164068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.164097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.164186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.164213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.164305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.164331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.164425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.164450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.164530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.164556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.164637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.164663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.164764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.164790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.164869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.164895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.164974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.165006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.165130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.165159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.165248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.165286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.165376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.165403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.165488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.165514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.165598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.866 [2024-11-17 18:56:19.165625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.866 qpair failed and we were unable to recover it.
00:35:32.866 [2024-11-17 18:56:19.165705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.165733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.165814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.165840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.165934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.165973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.166073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:32.867 [2024-11-17 18:56:19.166101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.166190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.166218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.166300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.166327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.166412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.166439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.166519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.166548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.166664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.166703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:32.867 [2024-11-17 18:56:19.166782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.166810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.166891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.166918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.867 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.167007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.167033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.167122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.167148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.167242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.167270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.167362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.167390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.167468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.167495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.167610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.167636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.167749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.167778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.167862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.167889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.167970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.167997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.168079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.168105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.168181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.168207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.168291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.168319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.168408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.168435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.168513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.168540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.168617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.168644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.168738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.168765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.168855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.168882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.168976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.169002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.169086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.169112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.169196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.169223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.169300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.169328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.169418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.169447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.169538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.169567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.169653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.169695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.169788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.169814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.169898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.169924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.170023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.170052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.170156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.170184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.170267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.170294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.170374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.170406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.170491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.170517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.170612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.170638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.170736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:32.867 [2024-11-17 18:56:19.170764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420
00:35:32.867 qpair failed and we were unable to recover it.
00:35:32.867 [2024-11-17 18:56:19.170859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.867 [2024-11-17 18:56:19.170885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.867 qpair failed and we were unable to recover it. 00:35:32.867 [2024-11-17 18:56:19.170970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.867 [2024-11-17 18:56:19.171006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.867 qpair failed and we were unable to recover it. 00:35:32.867 [2024-11-17 18:56:19.171085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.867 [2024-11-17 18:56:19.171111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.867 qpair failed and we were unable to recover it. 00:35:32.867 [2024-11-17 18:56:19.171202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.867 [2024-11-17 18:56:19.171228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.867 qpair failed and we were unable to recover it. 00:35:32.867 [2024-11-17 18:56:19.171325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.867 [2024-11-17 18:56:19.171363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.867 qpair failed and we were unable to recover it. 
00:35:32.867 [2024-11-17 18:56:19.171449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.867 [2024-11-17 18:56:19.171476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.867 qpair failed and we were unable to recover it. 00:35:32.867 [2024-11-17 18:56:19.171556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.171584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.171680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.171708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.171789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.171815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.171896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.171923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 
00:35:32.868 [2024-11-17 18:56:19.172023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.172050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.172242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.172269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.172355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.172383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.172471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.172497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.172591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.172621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 
00:35:32.868 [2024-11-17 18:56:19.172707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.172735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.172826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.172852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.172937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.172963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.173056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.173082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.173164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.173190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 
00:35:32.868 [2024-11-17 18:56:19.173273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.173299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.173379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.173407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.173493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.173520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.173606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.173638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.173753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.173780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 
00:35:32.868 [2024-11-17 18:56:19.173863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.173889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.173973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.173999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.174093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.174119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.174209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.174238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.174325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.174352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 
00:35:32.868 [2024-11-17 18:56:19.174441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.174468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.174558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.174585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.174671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.174704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.174794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.174821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.175020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.175047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 
00:35:32.868 [2024-11-17 18:56:19.175130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.175156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.175252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.175280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.175366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.175394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.175490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.175518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.175598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.175633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 
00:35:32.868 [2024-11-17 18:56:19.175721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.175748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.175831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.175857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.175943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.175969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.176051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.176077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4dc0000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.176163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.176191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db8000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 
00:35:32.868 [2024-11-17 18:56:19.176295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.176334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3f690 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.176430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.176457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.176544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.176572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.176662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.176696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.176784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.176811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 
00:35:32.868 [2024-11-17 18:56:19.176949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.176975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 [2024-11-17 18:56:19.177087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.177113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4db4000b90 with addr=10.0.0.2, port=4420 00:35:32.868 qpair failed and we were unable to recover it. 00:35:32.868 A controller has encountered a failure and is being reset. 00:35:32.868 [2024-11-17 18:56:19.177234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.868 [2024-11-17 18:56:19.177272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa4d630 with addr=10.0.0.2, port=4420 00:35:32.868 [2024-11-17 18:56:19.177290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4d630 is same with the state(6) to be set 00:35:32.868 [2024-11-17 18:56:19.177316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4d630 (9): Bad file descriptor 00:35:32.868 [2024-11-17 18:56:19.177334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:35:32.868 [2024-11-17 18:56:19.177347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:35:32.868 [2024-11-17 18:56:19.177362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:35:32.868 Unable to reset the controller. 
00:35:32.868 Malloc0 00:35:32.868 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.868 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:32.868 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.868 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.868 [2024-11-17 18:56:19.218368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.868 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.868 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:32.868 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.868 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.868 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.869 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:32.869 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.869 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.869 18:56:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.869 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:32.869 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.869 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.869 [2024-11-17 18:56:19.246635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.869 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.869 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:32.869 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.869 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:32.869 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.869 18:56:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 893502 00:35:33.801 Controller properly reset. 
00:35:39.064 Initializing NVMe Controllers 00:35:39.064 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:39.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:39.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:39.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:39.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:39.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:39.064 Initialization complete. Launching workers. 00:35:39.064 Starting thread on core 1 00:35:39.064 Starting thread on core 2 00:35:39.064 Starting thread on core 3 00:35:39.064 Starting thread on core 0 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:39.064 00:35:39.064 real 0m10.650s 00:35:39.064 user 0m33.863s 00:35:39.064 sys 0m7.258s 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:39.064 ************************************ 00:35:39.064 END TEST nvmf_target_disconnect_tc2 00:35:39.064 ************************************ 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:39.064 18:56:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:39.064 rmmod nvme_tcp 00:35:39.064 rmmod nvme_fabrics 00:35:39.064 rmmod nvme_keyring 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 893909 ']' 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 893909 00:35:39.064 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 893909 ']' 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 893909 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 893909 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 893909' 00:35:39.065 killing process with pid 893909 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 893909 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 893909 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:39.065 18:56:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:40.967 18:56:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:40.967 00:35:40.967 real 0m15.503s 00:35:40.967 user 0m59.334s 00:35:40.967 sys 
0m9.707s 00:35:40.967 18:56:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:40.967 18:56:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:40.967 ************************************ 00:35:40.967 END TEST nvmf_target_disconnect 00:35:40.967 ************************************ 00:35:40.967 18:56:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:40.967 00:35:40.967 real 6m40.478s 00:35:40.967 user 17m27.571s 00:35:40.967 sys 1m28.722s 00:35:40.967 18:56:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:40.967 18:56:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.967 ************************************ 00:35:40.967 END TEST nvmf_host 00:35:40.967 ************************************ 00:35:40.967 18:56:27 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:35:40.967 18:56:27 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:35:40.967 18:56:27 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:40.967 18:56:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:40.967 18:56:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:40.967 18:56:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:40.967 ************************************ 00:35:40.967 START TEST nvmf_target_core_interrupt_mode 00:35:40.967 ************************************ 00:35:40.967 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:41.226 * Looking for test storage... 
00:35:41.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:35:41.226 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:35:41.227 18:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:41.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.227 --rc 
genhtml_branch_coverage=1 00:35:41.227 --rc genhtml_function_coverage=1 00:35:41.227 --rc genhtml_legend=1 00:35:41.227 --rc geninfo_all_blocks=1 00:35:41.227 --rc geninfo_unexecuted_blocks=1 00:35:41.227 00:35:41.227 ' 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:41.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.227 --rc genhtml_branch_coverage=1 00:35:41.227 --rc genhtml_function_coverage=1 00:35:41.227 --rc genhtml_legend=1 00:35:41.227 --rc geninfo_all_blocks=1 00:35:41.227 --rc geninfo_unexecuted_blocks=1 00:35:41.227 00:35:41.227 ' 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:41.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.227 --rc genhtml_branch_coverage=1 00:35:41.227 --rc genhtml_function_coverage=1 00:35:41.227 --rc genhtml_legend=1 00:35:41.227 --rc geninfo_all_blocks=1 00:35:41.227 --rc geninfo_unexecuted_blocks=1 00:35:41.227 00:35:41.227 ' 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:41.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.227 --rc genhtml_branch_coverage=1 00:35:41.227 --rc genhtml_function_coverage=1 00:35:41.227 --rc genhtml_legend=1 00:35:41.227 --rc geninfo_all_blocks=1 00:35:41.227 --rc geninfo_unexecuted_blocks=1 00:35:41.227 00:35:41.227 ' 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:41.227 
18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.227 18:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:41.227 
18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:41.227 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:35:41.228 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:35:41.228 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:35:41.228 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:41.228 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:41.228 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:41.228 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:41.228 ************************************ 00:35:41.228 START TEST nvmf_abort 00:35:41.228 ************************************ 00:35:41.228 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:41.228 * Looking for test storage... 
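Each suite repeats the same `scripts/common.sh` version gate traced above: `lt 1.15 2` splits both version strings on `.`/`-` into arrays and compares field by field, so the lcov branch-coverage flags are only enabled for old lcov. A minimal sketch of that split-and-compare logic, with a hypothetical function name:

```shell
# Sketch of the cmp_versions-style gate traced in the log (name is illustrative).
# Versions are split on '.' and '-' and compared numerically per field, so
# "1.15" < "2" even though it is the longer string.
ver_lt() {
    local -a v1 v2
    local i len
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    len=${#v1[@]}
    (( ${#v2[@]} > len )) && len=${#v2[@]}
    for (( i = 0; i < len; i++ )); do
        # Missing trailing fields compare as 0, e.g. 2 == 2.0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
}

if ver_lt 1.15 2; then
    echo "old lcov: add --rc lcov_branch_coverage=1 ..."
fi
```

This mirrors why the log goes on to export `LCOV_OPTS` with the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` flags: the installed lcov reported a version below 2.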
00:35:41.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:41.228 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:41.228 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:35:41.228 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:35:41.489 18:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:41.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.489 --rc genhtml_branch_coverage=1 00:35:41.489 --rc genhtml_function_coverage=1 00:35:41.489 --rc genhtml_legend=1 00:35:41.489 --rc geninfo_all_blocks=1 00:35:41.489 --rc geninfo_unexecuted_blocks=1 00:35:41.489 00:35:41.489 ' 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:41.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.489 --rc genhtml_branch_coverage=1 00:35:41.489 --rc genhtml_function_coverage=1 00:35:41.489 --rc genhtml_legend=1 00:35:41.489 --rc geninfo_all_blocks=1 00:35:41.489 --rc geninfo_unexecuted_blocks=1 00:35:41.489 00:35:41.489 ' 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:41.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.489 --rc genhtml_branch_coverage=1 00:35:41.489 --rc genhtml_function_coverage=1 00:35:41.489 --rc genhtml_legend=1 00:35:41.489 --rc geninfo_all_blocks=1 00:35:41.489 --rc geninfo_unexecuted_blocks=1 00:35:41.489 00:35:41.489 ' 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:41.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.489 --rc genhtml_branch_coverage=1 00:35:41.489 --rc genhtml_function_coverage=1 00:35:41.489 --rc genhtml_legend=1 00:35:41.489 --rc geninfo_all_blocks=1 00:35:41.489 --rc geninfo_unexecuted_blocks=1 00:35:41.489 00:35:41.489 ' 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:41.489 18:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.489 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:41.490 18:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:35:41.490 18:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:44.085 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
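The `build_nvmf_app_args` trace above appends launch flags to an array one conditional at a time: the shared-memory id and log mask always, `--interrupt-mode` only because this run passed `--interrupt-mode`. A minimal sketch of that array-assembly pattern (variable values here are illustrative; the flag names `-i`, `-e 0xFFFF`, and `--interrupt-mode` are taken from the trace itself):

```shell
# Sketch of the conditional argument assembly traced in nvmf/common.sh above.
NVMF_APP=(nvmf_tgt)          # hypothetical binary name for the sketch
NVMF_APP_SHM_ID=0
interrupt_mode=1             # set because this run used --interrupt-mode

NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + log-level mask, always
if [ "$interrupt_mode" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)              # only for interrupt-mode runs
fi

echo "${NVMF_APP[@]}"
```

Building the command line as an array (rather than a string) keeps each flag a separate word when the app is finally executed as `"${NVMF_APP[@]}"`, so values with spaces survive intact.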
00:35:44.085 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:35:44.085 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:44.085 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:44.086 18:56:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:44.086 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:44.086 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:44.086 
18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:44.086 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:44.086 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:44.086 18:56:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:44.086 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:44.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:44.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:35:44.087 00:35:44.087 --- 10.0.0.2 ping statistics --- 00:35:44.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:44.087 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:44.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:44.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:35:44.087 00:35:44.087 --- 10.0.0.1 ping statistics --- 00:35:44.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:44.087 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=896716 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 896716 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 896716 ']' 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:44.087 [2024-11-17 18:56:30.285262] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:44.087 [2024-11-17 18:56:30.286436] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:35:44.087 [2024-11-17 18:56:30.286502] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:44.087 [2024-11-17 18:56:30.363900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:44.087 [2024-11-17 18:56:30.413371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:44.087 [2024-11-17 18:56:30.413425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:44.087 [2024-11-17 18:56:30.413440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:44.087 [2024-11-17 18:56:30.413450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:44.087 [2024-11-17 18:56:30.413460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:44.087 [2024-11-17 18:56:30.415081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:44.087 [2024-11-17 18:56:30.415145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:44.087 [2024-11-17 18:56:30.415149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.087 [2024-11-17 18:56:30.504599] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:44.087 [2024-11-17 18:56:30.504827] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:44.087 [2024-11-17 18:56:30.504837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:35:44.087 [2024-11-17 18:56:30.505104] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:44.087 [2024-11-17 18:56:30.559899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:35:44.087 Malloc0 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:44.087 Delay0 00:35:44.087 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:44.088 [2024-11-17 18:56:30.632094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.088 18:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:35:44.346 [2024-11-17 18:56:30.741571] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:46.887 Initializing NVMe Controllers 00:35:46.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:46.887 controller IO queue size 128 less than required 00:35:46.887 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:35:46.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:35:46.887 Initialization complete. Launching workers. 
00:35:46.887 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28417 00:35:46.887 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28474, failed to submit 66 00:35:46.887 success 28417, unsuccessful 57, failed 0 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:46.887 rmmod nvme_tcp 00:35:46.887 rmmod nvme_fabrics 00:35:46.887 rmmod nvme_keyring 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:46.887 18:56:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 896716 ']' 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 896716 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 896716 ']' 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 896716 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.887 18:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 896716 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 896716' 00:35:46.887 killing process with pid 896716 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 896716 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 896716 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:46.887 18:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.794 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:48.794 00:35:48.794 real 0m7.547s 00:35:48.794 user 0m9.592s 00:35:48.794 sys 0m3.111s 00:35:48.794 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:48.794 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.794 ************************************ 00:35:48.794 END TEST nvmf_abort 00:35:48.794 ************************************ 00:35:48.794 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 
-- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:48.794 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:48.794 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:48.794 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:48.794 ************************************ 00:35:48.794 START TEST nvmf_ns_hotplug_stress 00:35:48.794 ************************************ 00:35:48.794 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:49.054 * Looking for test storage... 00:35:49.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 
ver2_l 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:49.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.054 --rc genhtml_branch_coverage=1 00:35:49.054 --rc genhtml_function_coverage=1 00:35:49.054 --rc genhtml_legend=1 00:35:49.054 --rc geninfo_all_blocks=1 00:35:49.054 --rc geninfo_unexecuted_blocks=1 00:35:49.054 00:35:49.054 ' 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:49.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.054 --rc genhtml_branch_coverage=1 00:35:49.054 --rc genhtml_function_coverage=1 00:35:49.054 --rc genhtml_legend=1 00:35:49.054 --rc geninfo_all_blocks=1 00:35:49.054 --rc geninfo_unexecuted_blocks=1 00:35:49.054 00:35:49.054 ' 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:49.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.054 --rc genhtml_branch_coverage=1 00:35:49.054 --rc genhtml_function_coverage=1 00:35:49.054 --rc genhtml_legend=1 00:35:49.054 --rc geninfo_all_blocks=1 00:35:49.054 --rc geninfo_unexecuted_blocks=1 00:35:49.054 00:35:49.054 ' 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:49.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.054 --rc genhtml_branch_coverage=1 00:35:49.054 --rc genhtml_function_coverage=1 00:35:49.054 --rc genhtml_legend=1 00:35:49.054 --rc geninfo_all_blocks=1 00:35:49.054 --rc geninfo_unexecuted_blocks=1 00:35:49.054 00:35:49.054 ' 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:49.054 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:49.055 18:56:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:49.055 18:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:35:49.055 18:56:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:35:51.585 18:56:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:51.585 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:51.586 
18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:51.586 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:51.586 18:56:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:51.586 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:51.586 18:56:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:51.586 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:51.586 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:51.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:51.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:35:51.586 00:35:51.586 --- 10.0.0.2 ping statistics --- 00:35:51.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.586 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:51.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:51.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:35:51.586 00:35:51.586 --- 10.0.0.1 ping statistics --- 00:35:51.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.586 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:51.586 18:56:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:51.586 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=899057 00:35:51.587 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:51.587 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 899057 00:35:51.587 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 899057 ']' 00:35:51.587 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.587 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:51.587 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:51.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.587 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:51.587 18:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:51.587 [2024-11-17 18:56:37.823808] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:51.587 [2024-11-17 18:56:37.824858] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:35:51.587 [2024-11-17 18:56:37.824932] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:51.587 [2024-11-17 18:56:37.899283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:51.587 [2024-11-17 18:56:37.945497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:51.587 [2024-11-17 18:56:37.945548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:51.587 [2024-11-17 18:56:37.945571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:51.587 [2024-11-17 18:56:37.945582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:51.587 [2024-11-17 18:56:37.945592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:51.587 [2024-11-17 18:56:37.947137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:51.587 [2024-11-17 18:56:37.947202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:51.587 [2024-11-17 18:56:37.947205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:51.587 [2024-11-17 18:56:38.029398] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:51.587 [2024-11-17 18:56:38.029596] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:51.587 [2024-11-17 18:56:38.029599] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:51.587 [2024-11-17 18:56:38.029883] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:51.587 18:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:51.587 18:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:35:51.587 18:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:51.587 18:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:51.587 18:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:51.587 18:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:51.587 18:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:35:51.587 18:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:35:51.844 [2024-11-17 18:56:38.339882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:51.845 18:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:35:52.103 18:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:52.363 [2024-11-17 18:56:38.892184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:52.363 18:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:35:52.622 18:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:35:53.187 Malloc0
00:35:53.187 18:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:35:53.445 Delay0
00:35:53.445 18:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:53.703 18:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:35:53.962 NULL1
00:35:53.962 18:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:35:54.220 18:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=899356
00:35:54.220 18:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:35:54.220 18:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356
00:35:54.220 18:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:55.599 Read completed with error (sct=0, sc=11)
00:35:55.599 18:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:55.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:55.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:35:55.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
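The target bring-up traced above (transport, subsystem, listeners, bdevs, namespaces) reduces to the RPC sequence below. This is a standalone sketch: `rpc` here is a stub that only records calls, so the script runs without SPDK; in the real test each line is `scripts/rpc.py` against the live nvmf_tgt, with all arguments copied from the log.

```shell
NQN="nqn.2016-06.io.spdk:cnode1"
n=0
rpc() { n=$((n + 1)); last="$*"; }   # stub recorder; real run: scripts/rpc.py "$@"

rpc nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, flags as in the log
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10    # allow any host, serial, max 10 namespaces
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0                           # 32 MiB malloc bdev, 512-byte blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delay bdev layered on Malloc0
rpc nvmf_subsystem_add_ns "$NQN" Delay0
rpc bdev_null_create NULL1 1000 512                                # null bdev, resized later by the hotplug loop
rpc nvmf_subsystem_add_ns "$NQN" NULL1
echo "$n"
```

With the stub in place the script simply records nine RPC calls, which matches the nine bring-up commands visible in the trace.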
00:35:55.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:55.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:55.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:55.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:55.600 18:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:35:55.600 18:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:35:55.857 true 00:35:55.857 18:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:35:55.857 18:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:56.791 18:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:57.049 18:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:35:57.049 18:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:35:57.307 true 00:35:57.307 18:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:35:57.307 18:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:57.564 18:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:57.822 18:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:35:57.822 18:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:35:58.081 true 00:35:58.081 18:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:35:58.081 18:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:58.339 18:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:58.597 18:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:35:58.597 18:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:35:58.855 true 00:35:58.855 18:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:35:58.855 18:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:59.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:35:59.794 18:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:00.052 18:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:00.052 18:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:00.309 true 00:36:00.310 18:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:00.310 18:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:00.567 18:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:00.825 18:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:00.825 18:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:01.083 true 00:36:01.083 18:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
899356 00:36:01.083 18:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:01.341 18:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:01.907 18:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:01.907 18:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:01.907 true 00:36:01.907 18:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:01.907 18:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:03.284 18:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:03.284 18:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:03.284 18:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:03.542 true 00:36:03.542 18:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 899356 00:36:03.542 18:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:03.801 18:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:04.059 18:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:04.059 18:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:04.317 true 00:36:04.317 18:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:04.317 18:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:04.575 18:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:04.833 18:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:04.833 18:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:05.092 true 00:36:05.092 18:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:05.092 18:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:06.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:06.028 18:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:06.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:06.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:06.286 18:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:06.286 18:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:06.545 true 00:36:06.545 18:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:06.545 18:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:07.113 18:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:07.114 18:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:07.114 18:56:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:07.372 true 00:36:07.372 18:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:07.372 18:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:08.306 18:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:08.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:08.564 18:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:08.564 18:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:08.823 true 00:36:08.823 18:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:08.823 18:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:09.081 18:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:09.339 18:56:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:09.339 18:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:09.597 true 00:36:09.597 18:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:09.597 18:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:10.531 18:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:10.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:10.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:10.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:10.789 18:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:10.789 18:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:11.047 true 00:36:11.047 18:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:11.047 18:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
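Each numbered iteration in this stretch of the log (null_size=1001 through 1028) repeats the same four steps from ns_hotplug_stress.sh lines 44-50. A minimal standalone sketch of that loop follows; `rpc` and the perf-liveness check are stubs so it runs on its own, whereas the real script calls `scripts/rpc.py` and tests `kill -0 $PERF_PID`.

```shell
NQN="nqn.2016-06.io.spdk:cnode1"
null_size=1000
iterations=0

rpc() { :; }                                # stub; real script: scripts/rpc.py "$@"
perf_alive() { [ "$iterations" -lt 3 ]; }   # stands in for: kill -0 "$PERF_PID"

# While the spdk_nvme_perf workload is still running, hot-remove namespace 1,
# re-attach the Delay0 bdev, and grow NULL1 by one unit -- the remove/add/resize
# churn visible in every iteration of the log.
while perf_alive; do
    rpc nvmf_subsystem_remove_ns "$NQN" 1
    rpc nvmf_subsystem_add_ns "$NQN" Delay0
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"
    iterations=$((iterations + 1))
done
echo "$null_size"   # 1003 after the three stubbed iterations
```

In the real run the loop keeps going for the full 30-second perf workload, which is why null_size climbs from 1001 to 1028 before the perf process exits.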
00:36:11.305 18:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:11.563 18:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:11.563 18:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:11.820 true 00:36:11.820 18:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:11.820 18:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:12.758 18:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:12.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:12.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:12.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:12.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:12.758 18:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:12.758 18:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:13.015 true 00:36:13.015 18:56:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:13.015 18:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:13.273 18:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:13.840 18:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:13.840 18:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:13.840 true 00:36:13.840 18:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:13.840 18:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:14.866 18:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:15.124 18:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:15.124 18:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:15.382 true 
00:36:15.382 18:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:15.382 18:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:15.640 18:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:15.898 18:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:15.898 18:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:16.157 true 00:36:16.157 18:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:16.157 18:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:16.415 18:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:16.674 18:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:16.674 18:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 
00:36:16.932 true 00:36:16.932 18:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:16.932 18:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:17.866 18:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:18.123 18:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:18.123 18:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:18.381 true 00:36:18.381 18:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:18.381 18:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:18.639 18:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:18.898 18:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:18.898 18:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1023 00:36:19.464 true 00:36:19.465 18:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:19.465 18:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.465 18:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:20.034 18:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:20.034 18:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:20.034 true 00:36:20.034 18:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:20.034 18:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.412 18:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:21.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:21.412 18:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:21.412 18:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:21.671 true 00:36:21.671 18:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:21.671 18:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.929 18:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:22.187 18:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:22.187 18:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:22.446 true 00:36:22.446 18:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356 00:36:22.446 18:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:23.015 18:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.015 18:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:23.015 18:57:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:36:23.274 true
00:36:23.274 18:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356
00:36:23.274 18:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:24.651 18:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:24.651 Initializing NVMe Controllers
00:36:24.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:24.651 Controller IO queue size 128, less than required.
00:36:24.651 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:24.651 Controller IO queue size 128, less than required.
00:36:24.651 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:24.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:36:24.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:36:24.651 Initialization complete. Launching workers.
00:36:24.651 ========================================================
00:36:24.651                                                            Latency(us)
00:36:24.651 Device Information                                                     :     IOPS    MiB/s    Average        min        max
00:36:24.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   743.62     0.36   77734.86    2912.40 1071918.22
00:36:24.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:  9434.74     4.61   13568.46    1826.30  453896.58
00:36:24.651 ========================================================
00:36:24.651 Total                                                                  : 10178.35     4.97   18256.37    1826.30 1071918.22
00:36:24.651
00:36:24.651 18:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:36:24.651 18:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:36:24.909 true
00:36:24.909 18:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 899356
00:36:24.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (899356) - No such process
00:36:24.909 18:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 899356
00:36:24.909 18:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:25.167 18:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:25.426 18:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:36:25.426 18:57:11
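The Total row in the perf summary is the IOPS-weighted mean of the two per-namespace average latencies; the much slower NSID 1 (likely the Delay0-backed namespace, given the 1,000,000 us delay parameters earlier) drags the aggregate up. A quick check, with the numbers copied from the table:

```shell
# Verify that Total average latency = IOPS-weighted mean of the two
# per-namespace averages (values taken from the perf summary above).
total_avg=$(awk 'BEGIN {
    iops1 = 743.62;  avg1 = 77734.86   # NSID 1, slow path
    iops2 = 9434.74; avg2 = 13568.46   # NSID 2, fast path
    printf "%.0f", (iops1 * avg1 + iops2 * avg2) / (iops1 + iops2)
}')
echo "$total_avg"
```

This prints a value matching the 18256.37 us Total average in the table (to rounding).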
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:25.426 18:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:25.426 18:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:25.426 18:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:25.684 null0 00:36:25.684 18:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:25.684 18:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:25.684 18:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:25.942 null1 00:36:25.942 18:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:25.942 18:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:25.942 18:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:26.200 null2 00:36:26.200 18:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:26.200 18:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:26.200 18:57:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:26.459 null3 00:36:26.459 18:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:26.459 18:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:26.459 18:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:26.719 null4 00:36:26.719 18:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:26.719 18:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:26.719 18:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:26.979 null5 00:36:26.979 18:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:26.979 18:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:26.979 18:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:27.238 null6 00:36:27.238 18:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:27.238 18:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:27.238 18:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:27.498 null7 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
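The `null_size=1028` / `bdev_null_resize` / `kill -0 899356` entries near the top of this section come from a loop that keeps growing the null bdev while the background perf process is still alive, then reaps it once `kill -0` fails with "No such process". A minimal sketch of that pattern, not the exact SPDK script: the RPC call is stubbed with `echo` (an assumption for standalone runs) and a short `sleep` stands in for the perf process; set `RPC_CMD` to the real `scripts/rpc.py` invocation to drive a live target.

```shell
#!/usr/bin/env bash
# Sketch of the resize-while-alive loop behind the @44-@50 log entries.
# Assumption: echo stub instead of scripts/rpc.py; sleep instead of bdevperf.
rpc=${RPC_CMD:-echo rpc.py}

sleep 0.1 &            # stand-in for the background perf process
perf_pid=$!

null_size=1024
while kill -0 "$perf_pid" 2>/dev/null; do
    ((++null_size))                              # grow the bdev each pass
    $rpc bdev_null_resize NULL1 "$null_size" >/dev/null
done
# Bash has reaped the child here, so kill -0 fails (the "No such process"
# message in the log); wait just collects the stored exit status.
wait "$perf_pid"
```

Once bash reaps the exited child, `kill -0` returns ESRCH, which is why the log shows the `kill: (899356) - No such process` diagnostic immediately before the `wait 899356` entry.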
00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
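The eight `bdev_null_create nullN 100 4096` calls logged above (@58-@60) are a plain indexed loop. A sketch under the same stub assumption (RPC replaced by `echo` so the block runs without an SPDK target):

```shell
#!/usr/bin/env bash
# Sketch of the @58-@60 null-bdev creation loop seen in the log.
# Assumption: echo stub instead of scripts/rpc.py for standalone runs.
rpc=${RPC_CMD:-echo rpc.py}

nthreads=8
for ((i = 0; i < nthreads; i++)); do
    # 100 MiB null bdev with a 4096-byte block size, matching the log
    $rpc bdev_null_create "null$i" 100 4096
done
```

Each iteration corresponds to one `bdev_null_create` entry followed by the `null0` … `null7` confirmation lines in the transcript.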
00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:27.498 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:27.499 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:27.499 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:27.499 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 903974 903975 903977 903979 903981 903983 903985 903987 00:36:27.499 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:27.499 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:28.065 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:28.065 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.065 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:36:28.065 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:28.065 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:28.065 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:28.065 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:28.065 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:28.323 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.323 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.323 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:28.323 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.323 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.323 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.324 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:28.582 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.582 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:28.582 18:57:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:28.582 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:28.582 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:28.582 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:28.582 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:28.582 18:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:28.841 18:57:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:28.841 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.842 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.842 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:28.842 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:28.842 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:28.842 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:29.100 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:29.100 18:57:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.100 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:29.100 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:29.100 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:29.100 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:29.100 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:29.100 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.359 18:57:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.359 18:57:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:29.359 18:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:29.617 18:57:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:29.617 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.617 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:29.617 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:29.617 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:29.617 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:29.617 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:29.617 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:30.184 18:57:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.184 18:57:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.184 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:30.441 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:30.441 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:30.441 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:30.441 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:30.441 18:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:30.441 18:57:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.699 18:57:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:30.699 18:57:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:30.699 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:30.957 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:30.957 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:30.957 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.957 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:30.957 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:30.957 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:30.957 18:57:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:30.957 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:31.216 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.216 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.216 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:31.216 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.216 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.216 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:31.216 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.216 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
4 nqn.2016-06.io.spdk:cnode1 null3 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.217 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:31.475 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:31.475 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:31.475 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.475 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:31.475 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:31.475 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:31.475 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:31.475 18:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.733 18:57:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.733 18:57:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:31.733 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:31.991 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:31.991 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:31.991 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:31.991 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:31.991 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.991 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:31.991 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:32.250 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:32.508 18:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:36:32.766 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:36:32.766 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:36:32.766 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:36:32.766 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:32.766 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:36:32.766 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:36:32.766 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:36:32.766 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.023 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:36:33.281 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:36:33.281 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:36:33.281 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:36:33.281 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:33.281 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:33.281 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:36:33.281 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:36:33.281 18:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:36:33.539 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.539 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.539 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.539 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.539 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.539 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:33.540 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:33.540 rmmod nvme_tcp
00:36:33.540 rmmod nvme_fabrics
00:36:33.798 rmmod nvme_keyring
00:36:33.798 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:33.798 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:36:33.798 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:36:33.798 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 899057 ']'
00:36:33.798 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 899057
00:36:33.798 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 899057 ']'
00:36:33.798 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 899057
00:36:33.798 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:36:33.798 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:33.798 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 899057
00:36:33.798 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:33.798 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:33.798 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 899057'
killing process with pid 899057
18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 899057
18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 899057
00:36:34.056 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:34.056 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:34.056 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:34.056 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:36:34.056 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:36:34.056 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:34.056 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:36:34.056 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:34.056 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:34.057 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:34.057 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:34.057 18:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:35.957 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:35.957
00:36:35.957 real 0m47.110s
00:36:35.957 user 3m15.506s
00:36:35.957 sys 0m23.597s
00:36:35.957 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:35.957 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:36:35.957 ************************************
00:36:35.957 END TEST nvmf_ns_hotplug_stress
00:36:35.957 ************************************
00:36:35.957 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:36:35.957 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:36:35.957 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:35.957 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:35.957 ************************************
00:36:35.957 START TEST nvmf_delete_subsystem
00:36:35.957 ************************************
18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:36:36.219 * Looking for test storage...
00:36:36.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:36.219 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:36:36.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:36.220 --rc genhtml_branch_coverage=1
00:36:36.220 --rc genhtml_function_coverage=1
00:36:36.220 --rc genhtml_legend=1
00:36:36.220 --rc geninfo_all_blocks=1
00:36:36.220 --rc geninfo_unexecuted_blocks=1
00:36:36.220
00:36:36.220 '
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:36:36.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:36.220 --rc genhtml_branch_coverage=1
00:36:36.220 --rc genhtml_function_coverage=1
00:36:36.220 --rc genhtml_legend=1
00:36:36.220 --rc geninfo_all_blocks=1
00:36:36.220 --rc geninfo_unexecuted_blocks=1
00:36:36.220
00:36:36.220 '
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:36:36.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:36.220 --rc genhtml_branch_coverage=1
00:36:36.220 --rc genhtml_function_coverage=1
00:36:36.220 --rc genhtml_legend=1
00:36:36.220 --rc geninfo_all_blocks=1
00:36:36.220 --rc geninfo_unexecuted_blocks=1
00:36:36.220
00:36:36.220 '
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:36:36.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:36.220 --rc genhtml_branch_coverage=1
00:36:36.220 --rc genhtml_function_coverage=1
00:36:36.220 --rc genhtml_legend=1
00:36:36.220 --rc geninfo_all_blocks=1
00:36:36.220 --rc geninfo_unexecuted_blocks=1
00:36:36.220
00:36:36.220 '
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.220 
18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']'
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:36:36.220 18:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:38.830 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:38.831 18:57:24
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:38.831 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:38.831 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:38.831 18:57:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:36:38.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:38.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:36:38.831 00:36:38.831 --- 10.0.0.2 ping statistics --- 00:36:38.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:38.831 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:38.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:38.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:36:38.831 00:36:38.831 --- 10.0.0.1 ping statistics --- 00:36:38.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:38.831 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=906846 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 906846 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 906846 ']' 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:38.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:38.831 18:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:38.831 [2024-11-17 18:57:25.023597] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:38.831 [2024-11-17 18:57:25.024722] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:36:38.831 [2024-11-17 18:57:25.024776] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:38.831 [2024-11-17 18:57:25.101610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:38.831 [2024-11-17 18:57:25.146075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:38.831 [2024-11-17 18:57:25.146128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:38.831 [2024-11-17 18:57:25.146151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:38.831 [2024-11-17 18:57:25.146162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:38.831 [2024-11-17 18:57:25.146171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:38.832 [2024-11-17 18:57:25.150693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:38.832 [2024-11-17 18:57:25.150704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.832 [2024-11-17 18:57:25.232239] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:36:38.832 [2024-11-17 18:57:25.232257] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:38.832 [2024-11-17 18:57:25.232497] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:38.832 [2024-11-17 18:57:25.287363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:38.832 [2024-11-17 18:57:25.303565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:38.832 NULL1 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:38.832 Delay0 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=906877 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:38.832 18:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:36:38.832 [2024-11-17 18:57:25.379400] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:36:41.357 18:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:41.357 18:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.357 18:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 starting I/O failed: -6 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 starting I/O failed: -6 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 starting I/O failed: -6 00:36:41.357 Write completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 starting I/O failed: -6 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 starting I/O failed: -6 00:36:41.357 Write completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 starting I/O failed: -6 00:36:41.357 Write completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 
00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 starting I/O failed: -6 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 starting I/O failed: -6 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.357 Read completed with error (sct=0, sc=8) 00:36:41.358 starting I/O failed: -6 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 [2024-11-17 18:57:27.458074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f34e400d350 is same with the state(6) to be set 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 starting I/O failed: -6 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 starting I/O failed: -6 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 starting I/O failed: -6 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 starting I/O failed: -6 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read 
completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 starting I/O failed: -6 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 starting I/O failed: -6 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 starting I/O failed: -6 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 starting I/O failed: -6 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 starting I/O failed: -6 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 starting I/O failed: -6 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 starting I/O failed: -6 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 starting I/O failed: -6 00:36:41.358 [2024-11-17 18:57:27.458752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2468f70 is same with the state(6) to be set 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read 
completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error 
(sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 [2024-11-17 18:57:27.459175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f34e4000c40 is same with the state(6) to be set 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.358 Write completed with error (sct=0, sc=8) 00:36:41.358 Read completed with error (sct=0, sc=8) 00:36:41.923 [2024-11-17 18:57:28.435331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467190 is same with the state(6) 
to be set 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 [2024-11-17 18:57:28.462306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f34e400d020 is same with the state(6) to be set 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 [2024-11-17 18:57:28.462463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f34e400d680 is same with the state(6) to be set 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error 
(sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 [2024-11-17 18:57:28.463425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2469150 is same with the state(6) to be set 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 
00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Read completed with error (sct=0, sc=8) 00:36:41.923 Write completed with error (sct=0, sc=8) 00:36:41.923 [2024-11-17 18:57:28.463633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2469510 is same with the state(6) to be set 00:36:41.923 Initializing NVMe Controllers 00:36:41.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:41.923 Controller IO queue size 128, less than required. 00:36:41.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:41.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:41.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:41.923 Initialization complete. Launching workers. 
00:36:41.923 ======================================================== 00:36:41.923 Latency(us) 00:36:41.923 Device Information : IOPS MiB/s Average min max 00:36:41.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.18 0.08 888445.71 577.68 1014199.84 00:36:41.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.35 0.07 942146.96 597.01 1045974.72 00:36:41.923 ======================================================== 00:36:41.923 Total : 324.53 0.16 913489.87 577.68 1045974.72 00:36:41.923 00:36:41.923 [2024-11-17 18:57:28.464478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2467190 (9): Bad file descriptor 00:36:41.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:36:41.923 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.923 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:36:41.923 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 906877 00:36:41.923 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 906877 00:36:42.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (906877) - No such process 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 906877 00:36:42.489 18:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 906877 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 906877 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:42.489 [2024-11-17 18:57:28.987532] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=907270 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907270 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:42.489 18:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:42.489 [2024-11-17 18:57:29.052337] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:36:43.055 18:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:43.055 18:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907270 00:36:43.055 18:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:43.621 18:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:43.621 18:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907270 00:36:43.621 18:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:44.186 18:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:44.186 18:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907270 00:36:44.186 18:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:44.444 18:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:44.444 18:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907270 00:36:44.444 18:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:45.012 18:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:45.012 18:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907270 00:36:45.012 18:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:45.576 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:45.576 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907270 00:36:45.576 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:45.834 Initializing NVMe Controllers 00:36:45.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:45.834 Controller IO queue size 128, less than required. 00:36:45.834 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:45.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:45.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:45.834 Initialization complete. Launching workers. 
00:36:45.834 ======================================================== 00:36:45.834 Latency(us) 00:36:45.834 Device Information : IOPS MiB/s Average min max 00:36:45.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004590.33 1000175.73 1041233.43 00:36:45.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005087.41 1000167.00 1042134.45 00:36:45.835 ======================================================== 00:36:45.835 Total : 256.00 0.12 1004838.87 1000167.00 1042134.45 00:36:45.835 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 907270 00:36:46.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (907270) - No such process 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 907270 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:46.093 rmmod nvme_tcp 00:36:46.093 rmmod nvme_fabrics 00:36:46.093 rmmod nvme_keyring 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 906846 ']' 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 906846 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 906846 ']' 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 906846 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 906846 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 906846' 00:36:46.093 killing process with pid 906846 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 906846 00:36:46.093 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 906846 00:36:46.351 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:46.351 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:46.351 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:46.351 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:36:46.351 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:36:46.351 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:46.351 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:36:46.351 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:46.351 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:46.351 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.351 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:46.351 18:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:48.885 18:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:48.885 00:36:48.885 real 0m12.367s 00:36:48.885 user 0m24.582s 00:36:48.885 sys 0m3.754s 00:36:48.885 18:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:48.885 18:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:48.885 ************************************ 00:36:48.885 END TEST nvmf_delete_subsystem 00:36:48.885 ************************************ 00:36:48.885 18:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:48.885 18:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:48.885 18:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:48.885 18:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:48.885 ************************************ 00:36:48.885 START TEST nvmf_host_management 00:36:48.885 ************************************ 00:36:48.885 18:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:48.885 * Looking for test storage... 
00:36:48.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:48.885 18:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:48.885 18:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:36:48.885 18:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:48.885 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:48.885 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:48.885 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:48.885 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:48.885 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:36:48.885 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:36:48.886 18:57:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:48.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.886 --rc genhtml_branch_coverage=1 00:36:48.886 --rc genhtml_function_coverage=1 00:36:48.886 --rc genhtml_legend=1 00:36:48.886 --rc geninfo_all_blocks=1 00:36:48.886 --rc geninfo_unexecuted_blocks=1 00:36:48.886 00:36:48.886 ' 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:48.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.886 --rc genhtml_branch_coverage=1 00:36:48.886 --rc genhtml_function_coverage=1 00:36:48.886 --rc genhtml_legend=1 00:36:48.886 --rc geninfo_all_blocks=1 00:36:48.886 --rc geninfo_unexecuted_blocks=1 00:36:48.886 00:36:48.886 ' 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:48.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.886 --rc genhtml_branch_coverage=1 00:36:48.886 --rc genhtml_function_coverage=1 00:36:48.886 --rc genhtml_legend=1 00:36:48.886 --rc geninfo_all_blocks=1 00:36:48.886 --rc geninfo_unexecuted_blocks=1 00:36:48.886 00:36:48.886 ' 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:48.886 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.886 --rc genhtml_branch_coverage=1 00:36:48.886 --rc genhtml_function_coverage=1 00:36:48.886 --rc genhtml_legend=1 00:36:48.886 --rc geninfo_all_blocks=1 00:36:48.886 --rc geninfo_unexecuted_blocks=1 00:36:48.886 00:36:48.886 ' 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:48.886 18:57:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.886 
18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:48.886 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:36:48.887 18:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:36:50.790 
18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:50.790 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:50.791 18:57:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:50.791 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:50.791 18:57:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:50.791 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:50.791 18:57:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:50.791 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:50.791 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:50.791 18:57:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:50.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:50.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:36:50.791 00:36:50.791 --- 10.0.0.2 ping statistics --- 00:36:50.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.791 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:50.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:50.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:36:50.791 00:36:50.791 --- 10.0.0.1 ping statistics --- 00:36:50.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.791 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:36:50.791 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:50.792 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:50.792 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:50.792 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=909641 00:36:50.792 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 909641 00:36:50.792 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:36:50.792 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 909641 ']' 00:36:50.792 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.792 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:36:50.792 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:50.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:50.792 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:50.792 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:50.792 [2024-11-17 18:57:37.329776] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:50.792 [2024-11-17 18:57:37.330865] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:36:50.792 [2024-11-17 18:57:37.330940] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:51.051 [2024-11-17 18:57:37.405783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:51.051 [2024-11-17 18:57:37.453814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:51.051 [2024-11-17 18:57:37.453872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:51.051 [2024-11-17 18:57:37.453886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:51.051 [2024-11-17 18:57:37.453899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:51.051 [2024-11-17 18:57:37.453909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:51.051 [2024-11-17 18:57:37.455612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:51.051 [2024-11-17 18:57:37.455684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:51.051 [2024-11-17 18:57:37.455739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:51.051 [2024-11-17 18:57:37.455743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.051 [2024-11-17 18:57:37.545402] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:51.051 [2024-11-17 18:57:37.545608] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:51.051 [2024-11-17 18:57:37.545925] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:51.051 [2024-11-17 18:57:37.546538] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:51.051 [2024-11-17 18:57:37.546784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:51.051 [2024-11-17 18:57:37.592463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:51.051 18:57:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.051 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:51.310 Malloc0 00:36:51.310 [2024-11-17 18:57:37.672622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=909773 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 909773 /var/tmp/bdevperf.sock 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 909773 ']' 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:51.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:51.310 { 00:36:51.310 "params": { 00:36:51.310 "name": "Nvme$subsystem", 00:36:51.310 "trtype": "$TEST_TRANSPORT", 00:36:51.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:51.310 "adrfam": "ipv4", 00:36:51.310 "trsvcid": "$NVMF_PORT", 00:36:51.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:51.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:51.310 "hdgst": ${hdgst:-false}, 00:36:51.310 "ddgst": ${ddgst:-false} 00:36:51.310 }, 00:36:51.310 "method": "bdev_nvme_attach_controller" 00:36:51.310 } 00:36:51.310 EOF 00:36:51.310 )") 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:36:51.310 18:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:51.310 "params": { 00:36:51.310 "name": "Nvme0", 00:36:51.310 "trtype": "tcp", 00:36:51.310 "traddr": "10.0.0.2", 00:36:51.310 "adrfam": "ipv4", 00:36:51.310 "trsvcid": "4420", 00:36:51.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:51.310 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:51.310 "hdgst": false, 00:36:51.310 "ddgst": false 00:36:51.310 }, 00:36:51.310 "method": "bdev_nvme_attach_controller" 00:36:51.310 }' 00:36:51.310 [2024-11-17 18:57:37.758480] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:36:51.310 [2024-11-17 18:57:37.758578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid909773 ] 00:36:51.310 [2024-11-17 18:57:37.829573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.310 [2024-11-17 18:57:37.877313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:51.876 Running I/O for 10 seconds... 
00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:36:51.877 18:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:36:51.877 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:36:52.136 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=556 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 556 -ge 100 ']' 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:52.137 [2024-11-17 18:57:38.556474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fe70 is same with the state(6) to be set 00:36:52.137 [2024-11-17 18:57:38.556535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fe70 is same with the state(6) to be set 00:36:52.137 [2024-11-17 18:57:38.556553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fe70 is same with the state(6) to be set 00:36:52.137 [2024-11-17 18:57:38.556567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1a9fe70 is same with the state(6) to be set 00:36:52.137 [2024-11-17 18:57:38.556579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fe70 is same with the state(6) to be set 00:36:52.137 [2024-11-17 18:57:38.556592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fe70 is same with the state(6) to be set 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.137 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:52.137 [2024-11-17 18:57:38.563558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:52.137 [2024-11-17 18:57:38.563601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.563619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:52.137 [2024-11-17 18:57:38.563634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.563648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:52.137 [2024-11-17 18:57:38.563672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.563697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:52.137 [2024-11-17 18:57:38.563711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.563735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a7970 is same with the state(6) to be set 00:36:52.137 [2024-11-17 18:57:38.566873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.566903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.566930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.566947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.566964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.566993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 
18:57:38.567043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 
nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:36:52.137 [2024-11-17 18:57:38.567555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.137 [2024-11-17 18:57:38.567615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.137 [2024-11-17 18:57:38.567631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.567646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.567697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.567717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.567732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.567747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.567761] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.567777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.567792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.567807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.567822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.567838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.567852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.567867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.567882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.567897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.567911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.567926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.567941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.567956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.567989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 
[2024-11-17 18:57:38.568300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 18:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.138 [2024-11-17 18:57:38.568671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 [2024-11-17 18:57:38.568788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.138 18:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:36:52.138 [2024-11-17 18:57:38.568818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.138 [2024-11-17 18:57:38.568832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.139 [2024-11-17 18:57:38.568847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.139 [2024-11-17 18:57:38.568866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.139 [2024-11-17 18:57:38.568881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.139 [2024-11-17 18:57:38.568896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.139 [2024-11-17 18:57:38.568910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:52.139 [2024-11-17 18:57:38.568925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:52.139 [2024-11-17 18:57:38.570201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:52.139 task offset: 78976 on job bdev=Nvme0n1 fails 00:36:52.139 00:36:52.139 Latency(us) 00:36:52.139 [2024-11-17T17:57:38.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:52.139 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:52.139 Job: Nvme0n1 ended in 
about 0.40 seconds with error 00:36:52.139 Verification LBA range: start 0x0 length 0x400 00:36:52.139 Nvme0n1 : 0.40 1553.14 97.07 161.10 0.00 36253.42 2524.35 33593.27 00:36:52.139 [2024-11-17T17:57:38.715Z] =================================================================================================================== 00:36:52.139 [2024-11-17T17:57:38.715Z] Total : 1553.14 97.07 161.10 0.00 36253.42 2524.35 33593.27 00:36:52.139 [2024-11-17 18:57:38.572153] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:52.139 [2024-11-17 18:57:38.572181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a7970 (9): Bad file descriptor 00:36:52.139 [2024-11-17 18:57:38.663801] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:36:53.071 18:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 909773 00:36:53.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (909773) - No such process 00:36:53.071 18:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:36:53.071 18:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:36:53.072 18:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:36:53.072 18:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:36:53.072 18:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # 
config=() 00:36:53.072 18:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:36:53.072 18:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:53.072 18:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:53.072 { 00:36:53.072 "params": { 00:36:53.072 "name": "Nvme$subsystem", 00:36:53.072 "trtype": "$TEST_TRANSPORT", 00:36:53.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.072 "adrfam": "ipv4", 00:36:53.072 "trsvcid": "$NVMF_PORT", 00:36:53.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.072 "hdgst": ${hdgst:-false}, 00:36:53.072 "ddgst": ${ddgst:-false} 00:36:53.072 }, 00:36:53.072 "method": "bdev_nvme_attach_controller" 00:36:53.072 } 00:36:53.072 EOF 00:36:53.072 )") 00:36:53.072 18:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:36:53.072 18:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:36:53.072 18:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:36:53.072 18:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:53.072 "params": { 00:36:53.072 "name": "Nvme0", 00:36:53.072 "trtype": "tcp", 00:36:53.072 "traddr": "10.0.0.2", 00:36:53.072 "adrfam": "ipv4", 00:36:53.072 "trsvcid": "4420", 00:36:53.072 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:53.072 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:53.072 "hdgst": false, 00:36:53.072 "ddgst": false 00:36:53.072 }, 00:36:53.072 "method": "bdev_nvme_attach_controller" 00:36:53.072 }' 00:36:53.072 [2024-11-17 18:57:39.622604] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:36:53.072 [2024-11-17 18:57:39.622726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910001 ] 00:36:53.328 [2024-11-17 18:57:39.693820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.328 [2024-11-17 18:57:39.739616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:53.585 Running I/O for 1 seconds... 00:36:54.518 1664.00 IOPS, 104.00 MiB/s 00:36:54.519 Latency(us) 00:36:54.519 [2024-11-17T17:57:41.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.519 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:54.519 Verification LBA range: start 0x0 length 0x400 00:36:54.519 Nvme0n1 : 1.01 1707.82 106.74 0.00 0.00 36860.76 5412.79 33010.73 00:36:54.519 [2024-11-17T17:57:41.095Z] =================================================================================================================== 00:36:54.519 [2024-11-17T17:57:41.095Z] Total : 1707.82 106.74 0.00 0.00 36860.76 5412.79 33010.73 00:36:54.776 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:36:54.776 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:36:54.776 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:36:54.776 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:54.776 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:36:54.776 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:54.776 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:36:54.776 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:54.776 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:36:54.776 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:54.776 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:54.776 rmmod nvme_tcp 00:36:54.776 rmmod nvme_fabrics 00:36:54.776 rmmod nvme_keyring 00:36:54.777 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:54.777 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:36:54.777 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:36:54.777 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 909641 ']' 00:36:54.777 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 909641 00:36:54.777 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 909641 ']' 00:36:54.777 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 909641 00:36:54.777 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:36:54.777 18:57:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:54.777 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 909641 00:36:54.777 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:54.777 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:54.777 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 909641' 00:36:54.777 killing process with pid 909641 00:36:54.777 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 909641 00:36:54.777 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 909641 00:36:55.036 [2024-11-17 18:57:41.456333] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:36:55.036 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:55.036 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:55.036 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:55.036 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:36:55.036 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:36:55.036 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:55.036 18:57:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:36:55.036 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:55.036 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:55.036 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:55.036 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:55.036 18:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:36:57.570 00:36:57.570 real 0m8.630s 00:36:57.570 user 0m16.940s 00:36:57.570 sys 0m3.764s 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:57.570 ************************************ 00:36:57.570 END TEST nvmf_host_management 00:36:57.570 ************************************ 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:57.570 
18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:57.570 ************************************ 00:36:57.570 START TEST nvmf_lvol 00:36:57.570 ************************************ 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:36:57.570 * Looking for test storage... 00:36:57.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:36:57.570 18:57:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:57.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.570 --rc genhtml_branch_coverage=1 00:36:57.570 --rc 
genhtml_function_coverage=1 00:36:57.570 --rc genhtml_legend=1 00:36:57.570 --rc geninfo_all_blocks=1 00:36:57.570 --rc geninfo_unexecuted_blocks=1 00:36:57.570 00:36:57.570 ' 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:57.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.570 --rc genhtml_branch_coverage=1 00:36:57.570 --rc genhtml_function_coverage=1 00:36:57.570 --rc genhtml_legend=1 00:36:57.570 --rc geninfo_all_blocks=1 00:36:57.570 --rc geninfo_unexecuted_blocks=1 00:36:57.570 00:36:57.570 ' 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:57.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.570 --rc genhtml_branch_coverage=1 00:36:57.570 --rc genhtml_function_coverage=1 00:36:57.570 --rc genhtml_legend=1 00:36:57.570 --rc geninfo_all_blocks=1 00:36:57.570 --rc geninfo_unexecuted_blocks=1 00:36:57.570 00:36:57.570 ' 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:57.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.570 --rc genhtml_branch_coverage=1 00:36:57.570 --rc genhtml_function_coverage=1 00:36:57.570 --rc genhtml_legend=1 00:36:57.570 --rc geninfo_all_blocks=1 00:36:57.570 --rc geninfo_unexecuted_blocks=1 00:36:57.570 00:36:57.570 ' 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:57.570 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.571 18:57:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:57.571 18:57:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:36:57.571 18:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:59.473 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:59.473 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:59.473 18:57:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:59.473 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:59.473 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:59.473 18:57:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:59.473 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:59.474 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:59.474 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:59.474 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:59.474 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:59.474 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:59.474 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:59.474 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:59.474 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:59.474 18:57:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:59.474 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:59.474 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:59.474 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:59.474 18:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:59.474 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:59.474 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:59.474 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:59.474 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:59.732 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:59.732 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:59.732 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:59.732 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:59.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:59.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:36:59.732 00:36:59.732 --- 10.0.0.2 ping statistics --- 00:36:59.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:59.732 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:36:59.732 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:59.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:59.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:36:59.732 00:36:59.732 --- 10.0.0.1 ping statistics --- 00:36:59.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:59.732 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:36:59.732 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:59.732 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:36:59.733 
18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=912124 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 912124 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 912124 ']' 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:59.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:59.733 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:59.733 [2024-11-17 18:57:46.147491] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:36:59.733 [2024-11-17 18:57:46.148626] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:36:59.733 [2024-11-17 18:57:46.148714] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:59.733 [2024-11-17 18:57:46.220935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:59.733 [2024-11-17 18:57:46.263412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:59.733 [2024-11-17 18:57:46.263470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:59.733 [2024-11-17 18:57:46.263489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:59.733 [2024-11-17 18:57:46.263499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:59.733 [2024-11-17 18:57:46.263509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:59.733 [2024-11-17 18:57:46.264992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:59.733 [2024-11-17 18:57:46.265062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:59.733 [2024-11-17 18:57:46.265059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:59.991 [2024-11-17 18:57:46.346777] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:59.991 [2024-11-17 18:57:46.346978] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:59.991 [2024-11-17 18:57:46.346990] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:36:59.991 [2024-11-17 18:57:46.347255] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:59.991 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:59.991 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:36:59.991 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:59.991 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:59.991 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:59.991 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:59.991 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:00.249 [2024-11-17 18:57:46.657725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:00.249 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:00.507 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:00.507 18:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:00.765 18:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:00.765 18:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:01.023 18:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:01.281 18:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=83243ba9-1a44-4230-8331-8341d53cbbe4 00:37:01.281 18:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 83243ba9-1a44-4230-8331-8341d53cbbe4 lvol 20 00:37:01.539 18:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=197acf68-04fd-4285-ad08-ac731d55b640 00:37:01.539 18:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:01.797 18:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 197acf68-04fd-4285-ad08-ac731d55b640 00:37:02.363 18:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:02.363 [2024-11-17 18:57:48.901877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.363 18:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:02.621 
18:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=912548 00:37:02.621 18:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:02.621 18:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:03.992 18:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 197acf68-04fd-4285-ad08-ac731d55b640 MY_SNAPSHOT 00:37:03.992 18:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9e040594-3752-4ff3-8c72-47010f03ac5b 00:37:03.992 18:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 197acf68-04fd-4285-ad08-ac731d55b640 30 00:37:04.250 18:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9e040594-3752-4ff3-8c72-47010f03ac5b MY_CLONE 00:37:04.816 18:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a69b9650-4f84-4e54-a6ca-2948653ec754 00:37:04.816 18:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a69b9650-4f84-4e54-a6ca-2948653ec754 00:37:05.382 18:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 912548 00:37:13.519 Initializing NVMe Controllers 00:37:13.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:13.519 
Controller IO queue size 128, less than required. 00:37:13.519 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:13.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:13.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:13.519 Initialization complete. Launching workers. 00:37:13.519 ======================================================== 00:37:13.519 Latency(us) 00:37:13.519 Device Information : IOPS MiB/s Average min max 00:37:13.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10359.40 40.47 12358.39 4795.65 73820.26 00:37:13.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10485.40 40.96 12210.98 5763.85 65749.14 00:37:13.519 ======================================================== 00:37:13.519 Total : 20844.80 81.42 12284.24 4795.65 73820.26 00:37:13.519 00:37:13.519 18:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:13.519 18:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 197acf68-04fd-4285-ad08-ac731d55b640 00:37:13.777 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 83243ba9-1a44-4230-8331-8341d53cbbe4 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:14.035 rmmod nvme_tcp 00:37:14.035 rmmod nvme_fabrics 00:37:14.035 rmmod nvme_keyring 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 912124 ']' 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 912124 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 912124 ']' 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 912124 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 912124 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 912124' 00:37:14.035 killing process with pid 912124 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 912124 00:37:14.035 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 912124 00:37:14.294 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:14.294 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:14.294 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:14.294 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:14.294 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:37:14.294 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:14.294 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:37:14.294 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:14.294 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:14.294 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:14.294 18:58:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:14.294 18:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:16.829 00:37:16.829 real 0m19.185s 00:37:16.829 user 0m56.571s 00:37:16.829 sys 0m7.608s 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:16.829 ************************************ 00:37:16.829 END TEST nvmf_lvol 00:37:16.829 ************************************ 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:16.829 ************************************ 00:37:16.829 START TEST nvmf_lvs_grow 00:37:16.829 ************************************ 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:16.829 * Looking for test storage... 
00:37:16.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:16.829 18:58:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:16.829 18:58:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:16.829 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:16.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.830 --rc genhtml_branch_coverage=1 00:37:16.830 --rc genhtml_function_coverage=1 00:37:16.830 --rc genhtml_legend=1 00:37:16.830 --rc geninfo_all_blocks=1 00:37:16.830 --rc geninfo_unexecuted_blocks=1 00:37:16.830 00:37:16.830 ' 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:16.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.830 --rc genhtml_branch_coverage=1 00:37:16.830 --rc genhtml_function_coverage=1 00:37:16.830 --rc genhtml_legend=1 00:37:16.830 --rc geninfo_all_blocks=1 00:37:16.830 --rc geninfo_unexecuted_blocks=1 00:37:16.830 00:37:16.830 ' 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:16.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.830 --rc genhtml_branch_coverage=1 00:37:16.830 --rc genhtml_function_coverage=1 00:37:16.830 --rc genhtml_legend=1 00:37:16.830 --rc geninfo_all_blocks=1 00:37:16.830 --rc geninfo_unexecuted_blocks=1 00:37:16.830 00:37:16.830 ' 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:16.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.830 --rc genhtml_branch_coverage=1 00:37:16.830 --rc genhtml_function_coverage=1 00:37:16.830 --rc genhtml_legend=1 00:37:16.830 --rc geninfo_all_blocks=1 00:37:16.830 --rc 
geninfo_unexecuted_blocks=1 00:37:16.830 00:37:16.830 ' 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:16.830 18:58:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.830 18:58:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:16.830 18:58:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:16.830 18:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:18.735 
18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:18.736 18:58:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:18.736 18:58:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:18.736 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:18.736 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:18.736 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:18.736 18:58:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:18.736 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:18.736 
18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:37:18.736 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:18.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:18.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:37:18.737 00:37:18.737 --- 10.0.0.2 ping statistics --- 00:37:18.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:18.737 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:18.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:18.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:37:18.737 00:37:18.737 --- 10.0.0.1 ping statistics --- 00:37:18.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:18.737 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:18.737 18:58:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=915797 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 915797 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 915797 ']' 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:18.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:18.737 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:18.737 [2024-11-17 18:58:05.279971] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:18.737 [2024-11-17 18:58:05.281057] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:37:18.737 [2024-11-17 18:58:05.281123] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:18.996 [2024-11-17 18:58:05.352777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:18.996 [2024-11-17 18:58:05.394567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:18.996 [2024-11-17 18:58:05.394628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:18.996 [2024-11-17 18:58:05.394657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:18.996 [2024-11-17 18:58:05.394668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:18.996 [2024-11-17 18:58:05.394687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:18.996 [2024-11-17 18:58:05.395268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:18.996 [2024-11-17 18:58:05.473262] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:18.996 [2024-11-17 18:58:05.473570] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:18.996 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:18.996 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:37:18.996 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:18.996 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:18.996 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:18.996 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:18.996 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:19.255 [2024-11-17 18:58:05.767858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:19.255 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:19.255 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:19.255 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:19.255 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:19.255 ************************************ 00:37:19.255 START TEST lvs_grow_clean 00:37:19.255 ************************************ 00:37:19.255 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:37:19.255 18:58:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:19.255 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:19.255 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:19.255 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:19.255 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:19.255 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:19.255 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:19.255 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:19.255 18:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:19.822 18:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:19.822 18:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:19.822 18:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cdfd919f-77a2-4aac-bad3-887e9130ba0b 00:37:19.822 18:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdfd919f-77a2-4aac-bad3-887e9130ba0b 00:37:19.822 18:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:20.389 18:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:20.389 18:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:20.389 18:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cdfd919f-77a2-4aac-bad3-887e9130ba0b lvol 150 00:37:20.389 18:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ffecd8eb-d4ed-4354-bab3-7ef6419a23d6 00:37:20.389 18:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:20.389 18:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:20.647 [2024-11-17 18:58:07.207732] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:20.647 [2024-11-17 18:58:07.207830] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:20.647 true 00:37:20.906 18:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdfd919f-77a2-4aac-bad3-887e9130ba0b 00:37:20.906 18:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:21.164 18:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:21.164 18:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:21.422 18:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ffecd8eb-d4ed-4354-bab3-7ef6419a23d6 00:37:21.680 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:21.939 [2024-11-17 18:58:08.316094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:21.939 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:22.197 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=916231 00:37:22.197 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:22.197 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:22.197 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 916231 /var/tmp/bdevperf.sock 00:37:22.197 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 916231 ']' 00:37:22.197 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:22.197 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:22.197 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:22.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:22.197 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:22.197 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:22.197 [2024-11-17 18:58:08.657194] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:37:22.197 [2024-11-17 18:58:08.657281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid916231 ] 00:37:22.197 [2024-11-17 18:58:08.724401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.197 [2024-11-17 18:58:08.770975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:22.454 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:22.454 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:37:22.454 18:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:22.712 Nvme0n1 00:37:22.712 18:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:22.970 [ 00:37:22.970 { 00:37:22.971 "name": "Nvme0n1", 00:37:22.971 "aliases": [ 00:37:22.971 "ffecd8eb-d4ed-4354-bab3-7ef6419a23d6" 00:37:22.971 ], 00:37:22.971 "product_name": "NVMe disk", 00:37:22.971 
"block_size": 4096, 00:37:22.971 "num_blocks": 38912, 00:37:22.971 "uuid": "ffecd8eb-d4ed-4354-bab3-7ef6419a23d6", 00:37:22.971 "numa_id": 0, 00:37:22.971 "assigned_rate_limits": { 00:37:22.971 "rw_ios_per_sec": 0, 00:37:22.971 "rw_mbytes_per_sec": 0, 00:37:22.971 "r_mbytes_per_sec": 0, 00:37:22.971 "w_mbytes_per_sec": 0 00:37:22.971 }, 00:37:22.971 "claimed": false, 00:37:22.971 "zoned": false, 00:37:22.971 "supported_io_types": { 00:37:22.971 "read": true, 00:37:22.971 "write": true, 00:37:22.971 "unmap": true, 00:37:22.971 "flush": true, 00:37:22.971 "reset": true, 00:37:22.971 "nvme_admin": true, 00:37:22.971 "nvme_io": true, 00:37:22.971 "nvme_io_md": false, 00:37:22.971 "write_zeroes": true, 00:37:22.971 "zcopy": false, 00:37:22.971 "get_zone_info": false, 00:37:22.971 "zone_management": false, 00:37:22.971 "zone_append": false, 00:37:22.971 "compare": true, 00:37:22.971 "compare_and_write": true, 00:37:22.971 "abort": true, 00:37:22.971 "seek_hole": false, 00:37:22.971 "seek_data": false, 00:37:22.971 "copy": true, 00:37:22.971 "nvme_iov_md": false 00:37:22.971 }, 00:37:22.971 "memory_domains": [ 00:37:22.971 { 00:37:22.971 "dma_device_id": "system", 00:37:22.971 "dma_device_type": 1 00:37:22.971 } 00:37:22.971 ], 00:37:22.971 "driver_specific": { 00:37:22.971 "nvme": [ 00:37:22.971 { 00:37:22.971 "trid": { 00:37:22.971 "trtype": "TCP", 00:37:22.971 "adrfam": "IPv4", 00:37:22.971 "traddr": "10.0.0.2", 00:37:22.971 "trsvcid": "4420", 00:37:22.971 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:22.971 }, 00:37:22.971 "ctrlr_data": { 00:37:22.971 "cntlid": 1, 00:37:22.971 "vendor_id": "0x8086", 00:37:22.971 "model_number": "SPDK bdev Controller", 00:37:22.971 "serial_number": "SPDK0", 00:37:22.971 "firmware_revision": "25.01", 00:37:22.971 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.971 "oacs": { 00:37:22.971 "security": 0, 00:37:22.971 "format": 0, 00:37:22.971 "firmware": 0, 00:37:22.971 "ns_manage": 0 00:37:22.971 }, 00:37:22.971 "multi_ctrlr": true, 
00:37:22.971 "ana_reporting": false 00:37:22.971 }, 00:37:22.971 "vs": { 00:37:22.971 "nvme_version": "1.3" 00:37:22.971 }, 00:37:22.971 "ns_data": { 00:37:22.971 "id": 1, 00:37:22.971 "can_share": true 00:37:22.971 } 00:37:22.971 } 00:37:22.971 ], 00:37:22.971 "mp_policy": "active_passive" 00:37:22.971 } 00:37:22.971 } 00:37:22.971 ] 00:37:22.971 18:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=916361 00:37:22.971 18:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:22.971 18:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:23.229 Running I/O for 10 seconds... 00:37:24.165 Latency(us) 00:37:24.165 [2024-11-17T17:58:10.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:24.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:24.165 Nvme0n1 : 1.00 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:37:24.165 [2024-11-17T17:58:10.741Z] =================================================================================================================== 00:37:24.165 [2024-11-17T17:58:10.741Z] Total : 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:37:24.165 00:37:25.100 18:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cdfd919f-77a2-4aac-bad3-887e9130ba0b 00:37:25.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:25.100 Nvme0n1 : 2.00 14994.50 58.57 0.00 0.00 0.00 0.00 0.00 00:37:25.100 [2024-11-17T17:58:11.676Z] 
=================================================================================================================== 00:37:25.100 [2024-11-17T17:58:11.676Z] Total : 14994.50 58.57 0.00 0.00 0.00 0.00 0.00 00:37:25.100 00:37:25.359 true 00:37:25.359 18:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdfd919f-77a2-4aac-bad3-887e9130ba0b 00:37:25.359 18:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:25.618 18:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:25.618 18:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:25.618 18:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 916361 00:37:26.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:26.184 Nvme0n1 : 3.00 15130.00 59.10 0.00 0.00 0.00 0.00 0.00 00:37:26.184 [2024-11-17T17:58:12.760Z] =================================================================================================================== 00:37:26.184 [2024-11-17T17:58:12.760Z] Total : 15130.00 59.10 0.00 0.00 0.00 0.00 0.00 00:37:26.184 00:37:27.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:27.120 Nvme0n1 : 4.00 15252.75 59.58 0.00 0.00 0.00 0.00 0.00 00:37:27.120 [2024-11-17T17:58:13.696Z] =================================================================================================================== 00:37:27.120 [2024-11-17T17:58:13.696Z] Total : 15252.75 59.58 0.00 0.00 0.00 0.00 0.00 00:37:27.120 00:37:28.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:37:28.056 Nvme0n1 : 5.00 15326.40 59.87 0.00 0.00 0.00 0.00 0.00 00:37:28.056 [2024-11-17T17:58:14.632Z] =================================================================================================================== 00:37:28.056 [2024-11-17T17:58:14.632Z] Total : 15326.40 59.87 0.00 0.00 0.00 0.00 0.00 00:37:28.056 00:37:29.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:29.431 Nvme0n1 : 6.00 15396.67 60.14 0.00 0.00 0.00 0.00 0.00 00:37:29.431 [2024-11-17T17:58:16.007Z] =================================================================================================================== 00:37:29.431 [2024-11-17T17:58:16.007Z] Total : 15396.67 60.14 0.00 0.00 0.00 0.00 0.00 00:37:29.431 00:37:30.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:30.366 Nvme0n1 : 7.00 15446.86 60.34 0.00 0.00 0.00 0.00 0.00 00:37:30.366 [2024-11-17T17:58:16.942Z] =================================================================================================================== 00:37:30.366 [2024-11-17T17:58:16.942Z] Total : 15446.86 60.34 0.00 0.00 0.00 0.00 0.00 00:37:30.366 00:37:31.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:31.300 Nvme0n1 : 8.00 15484.50 60.49 0.00 0.00 0.00 0.00 0.00 00:37:31.300 [2024-11-17T17:58:17.876Z] =================================================================================================================== 00:37:31.300 [2024-11-17T17:58:17.876Z] Total : 15484.50 60.49 0.00 0.00 0.00 0.00 0.00 00:37:31.300 00:37:32.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:32.232 Nvme0n1 : 9.00 15517.56 60.62 0.00 0.00 0.00 0.00 0.00 00:37:32.232 [2024-11-17T17:58:18.808Z] =================================================================================================================== 00:37:32.232 [2024-11-17T17:58:18.808Z] Total : 15517.56 60.62 0.00 0.00 0.00 0.00 0.00 00:37:32.232 
00:37:33.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:33.173 Nvme0n1 : 10.00 15553.30 60.76 0.00 0.00 0.00 0.00 0.00 00:37:33.173 [2024-11-17T17:58:19.749Z] =================================================================================================================== 00:37:33.173 [2024-11-17T17:58:19.749Z] Total : 15553.30 60.76 0.00 0.00 0.00 0.00 0.00 00:37:33.173 00:37:33.173 00:37:33.173 Latency(us) 00:37:33.173 [2024-11-17T17:58:19.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:33.173 Nvme0n1 : 10.01 15553.10 60.75 0.00 0.00 8225.24 4247.70 21068.61 00:37:33.173 [2024-11-17T17:58:19.749Z] =================================================================================================================== 00:37:33.173 [2024-11-17T17:58:19.749Z] Total : 15553.10 60.75 0.00 0.00 8225.24 4247.70 21068.61 00:37:33.173 { 00:37:33.173 "results": [ 00:37:33.173 { 00:37:33.173 "job": "Nvme0n1", 00:37:33.173 "core_mask": "0x2", 00:37:33.173 "workload": "randwrite", 00:37:33.173 "status": "finished", 00:37:33.173 "queue_depth": 128, 00:37:33.173 "io_size": 4096, 00:37:33.173 "runtime": 10.00836, 00:37:33.173 "iops": 15553.097610397708, 00:37:33.173 "mibps": 60.75428754061605, 00:37:33.173 "io_failed": 0, 00:37:33.173 "io_timeout": 0, 00:37:33.173 "avg_latency_us": 8225.243617195678, 00:37:33.173 "min_latency_us": 4247.7037037037035, 00:37:33.173 "max_latency_us": 21068.61037037037 00:37:33.173 } 00:37:33.173 ], 00:37:33.173 "core_count": 1 00:37:33.173 } 00:37:33.173 18:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 916231 00:37:33.173 18:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 916231 ']' 00:37:33.173 18:58:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 916231 00:37:33.173 18:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:37:33.173 18:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:33.173 18:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 916231 00:37:33.173 18:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:33.173 18:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:33.173 18:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 916231' 00:37:33.173 killing process with pid 916231 00:37:33.173 18:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 916231 00:37:33.173 Received shutdown signal, test time was about 10.000000 seconds 00:37:33.173 00:37:33.173 Latency(us) 00:37:33.173 [2024-11-17T17:58:19.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.173 [2024-11-17T17:58:19.749Z] =================================================================================================================== 00:37:33.173 [2024-11-17T17:58:19.749Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:33.173 18:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 916231 00:37:33.433 18:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:33.693 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:33.952 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdfd919f-77a2-4aac-bad3-887e9130ba0b 00:37:33.952 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:34.210 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:34.210 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:37:34.210 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:34.468 [2024-11-17 18:58:20.947808] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:34.468 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdfd919f-77a2-4aac-bad3-887e9130ba0b 00:37:34.468 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:37:34.469 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdfd919f-77a2-4aac-bad3-887e9130ba0b 00:37:34.469 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:34.469 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:34.469 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:34.469 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:34.469 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:34.469 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:34.469 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:34.469 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:34.469 18:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdfd919f-77a2-4aac-bad3-887e9130ba0b 00:37:34.727 request: 00:37:34.727 { 00:37:34.727 "uuid": "cdfd919f-77a2-4aac-bad3-887e9130ba0b", 00:37:34.727 "method": 
"bdev_lvol_get_lvstores", 00:37:34.727 "req_id": 1 00:37:34.727 } 00:37:34.727 Got JSON-RPC error response 00:37:34.727 response: 00:37:34.727 { 00:37:34.727 "code": -19, 00:37:34.727 "message": "No such device" 00:37:34.727 } 00:37:34.727 18:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:37:34.727 18:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:34.727 18:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:34.727 18:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:34.727 18:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:34.986 aio_bdev 00:37:34.986 18:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ffecd8eb-d4ed-4354-bab3-7ef6419a23d6 00:37:34.986 18:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=ffecd8eb-d4ed-4354-bab3-7ef6419a23d6 00:37:34.986 18:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:34.986 18:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:37:34.986 18:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:34.986 18:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:34.986 18:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:35.244 18:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ffecd8eb-d4ed-4354-bab3-7ef6419a23d6 -t 2000 00:37:35.810 [ 00:37:35.810 { 00:37:35.810 "name": "ffecd8eb-d4ed-4354-bab3-7ef6419a23d6", 00:37:35.810 "aliases": [ 00:37:35.810 "lvs/lvol" 00:37:35.810 ], 00:37:35.810 "product_name": "Logical Volume", 00:37:35.811 "block_size": 4096, 00:37:35.811 "num_blocks": 38912, 00:37:35.811 "uuid": "ffecd8eb-d4ed-4354-bab3-7ef6419a23d6", 00:37:35.811 "assigned_rate_limits": { 00:37:35.811 "rw_ios_per_sec": 0, 00:37:35.811 "rw_mbytes_per_sec": 0, 00:37:35.811 "r_mbytes_per_sec": 0, 00:37:35.811 "w_mbytes_per_sec": 0 00:37:35.811 }, 00:37:35.811 "claimed": false, 00:37:35.811 "zoned": false, 00:37:35.811 "supported_io_types": { 00:37:35.811 "read": true, 00:37:35.811 "write": true, 00:37:35.811 "unmap": true, 00:37:35.811 "flush": false, 00:37:35.811 "reset": true, 00:37:35.811 "nvme_admin": false, 00:37:35.811 "nvme_io": false, 00:37:35.811 "nvme_io_md": false, 00:37:35.811 "write_zeroes": true, 00:37:35.811 "zcopy": false, 00:37:35.811 "get_zone_info": false, 00:37:35.811 "zone_management": false, 00:37:35.811 "zone_append": false, 00:37:35.811 "compare": false, 00:37:35.811 "compare_and_write": false, 00:37:35.811 "abort": false, 00:37:35.811 "seek_hole": true, 00:37:35.811 "seek_data": true, 00:37:35.811 "copy": false, 00:37:35.811 "nvme_iov_md": false 00:37:35.811 }, 00:37:35.811 "driver_specific": { 00:37:35.811 "lvol": { 00:37:35.811 "lvol_store_uuid": "cdfd919f-77a2-4aac-bad3-887e9130ba0b", 00:37:35.811 "base_bdev": "aio_bdev", 00:37:35.811 
"thin_provision": false, 00:37:35.811 "num_allocated_clusters": 38, 00:37:35.811 "snapshot": false, 00:37:35.811 "clone": false, 00:37:35.811 "esnap_clone": false 00:37:35.811 } 00:37:35.811 } 00:37:35.811 } 00:37:35.811 ] 00:37:35.811 18:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:37:35.811 18:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdfd919f-77a2-4aac-bad3-887e9130ba0b 00:37:35.811 18:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:36.069 18:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:36.069 18:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdfd919f-77a2-4aac-bad3-887e9130ba0b 00:37:36.069 18:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:36.327 18:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:36.327 18:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ffecd8eb-d4ed-4354-bab3-7ef6419a23d6 00:37:36.585 18:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cdfd919f-77a2-4aac-bad3-887e9130ba0b 
00:37:36.844 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:37.102 00:37:37.102 real 0m17.727s 00:37:37.102 user 0m17.287s 00:37:37.102 sys 0m1.835s 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:37.102 ************************************ 00:37:37.102 END TEST lvs_grow_clean 00:37:37.102 ************************************ 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:37.102 ************************************ 00:37:37.102 START TEST lvs_grow_dirty 00:37:37.102 ************************************ 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:37.102 18:58:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:37.102 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:37.362 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:37.362 18:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:37.652 18:58:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c6bc26e3-963e-47af-8524-2bcbf2e27031 00:37:37.652 18:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6bc26e3-963e-47af-8524-2bcbf2e27031 00:37:37.652 18:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:37.935 18:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:37.935 18:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:37.935 18:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c6bc26e3-963e-47af-8524-2bcbf2e27031 lvol 150 00:37:38.193 18:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0bc036b0-21f6-4569-a3f3-a5820273edcc 00:37:38.193 18:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:38.193 18:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:38.453 [2024-11-17 18:58:25.007746] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:38.453 [2024-11-17 
18:58:25.007850] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:38.453 true 00:37:38.453 18:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6bc26e3-963e-47af-8524-2bcbf2e27031 00:37:38.453 18:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:39.024 18:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:39.024 18:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:39.024 18:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0bc036b0-21f6-4569-a3f3-a5820273edcc 00:37:39.285 18:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:39.546 [2024-11-17 18:58:26.104080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.546 18:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:40.114 18:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=918386 00:37:40.114 18:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:40.114 18:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:40.114 18:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 918386 /var/tmp/bdevperf.sock 00:37:40.114 18:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 918386 ']' 00:37:40.114 18:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:40.114 18:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:40.114 18:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:40.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:40.114 18:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:40.114 18:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:40.114 [2024-11-17 18:58:26.444247] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:37:40.114 [2024-11-17 18:58:26.444335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid918386 ] 00:37:40.114 [2024-11-17 18:58:26.512439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.114 [2024-11-17 18:58:26.559862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:40.373 18:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:40.373 18:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:37:40.373 18:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:40.633 Nvme0n1 00:37:40.633 18:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:40.892 [ 00:37:40.892 { 00:37:40.892 "name": "Nvme0n1", 00:37:40.892 "aliases": [ 00:37:40.892 "0bc036b0-21f6-4569-a3f3-a5820273edcc" 00:37:40.892 ], 00:37:40.892 "product_name": "NVMe disk", 00:37:40.892 "block_size": 4096, 00:37:40.892 "num_blocks": 38912, 00:37:40.892 "uuid": "0bc036b0-21f6-4569-a3f3-a5820273edcc", 00:37:40.892 "numa_id": 0, 00:37:40.892 "assigned_rate_limits": { 00:37:40.892 "rw_ios_per_sec": 0, 00:37:40.892 "rw_mbytes_per_sec": 0, 00:37:40.892 "r_mbytes_per_sec": 0, 00:37:40.892 "w_mbytes_per_sec": 0 00:37:40.892 }, 00:37:40.892 "claimed": false, 00:37:40.892 "zoned": false, 
00:37:40.892 "supported_io_types": { 00:37:40.892 "read": true, 00:37:40.892 "write": true, 00:37:40.892 "unmap": true, 00:37:40.892 "flush": true, 00:37:40.892 "reset": true, 00:37:40.892 "nvme_admin": true, 00:37:40.892 "nvme_io": true, 00:37:40.892 "nvme_io_md": false, 00:37:40.892 "write_zeroes": true, 00:37:40.892 "zcopy": false, 00:37:40.892 "get_zone_info": false, 00:37:40.892 "zone_management": false, 00:37:40.892 "zone_append": false, 00:37:40.892 "compare": true, 00:37:40.892 "compare_and_write": true, 00:37:40.892 "abort": true, 00:37:40.892 "seek_hole": false, 00:37:40.892 "seek_data": false, 00:37:40.892 "copy": true, 00:37:40.892 "nvme_iov_md": false 00:37:40.892 }, 00:37:40.892 "memory_domains": [ 00:37:40.892 { 00:37:40.892 "dma_device_id": "system", 00:37:40.892 "dma_device_type": 1 00:37:40.892 } 00:37:40.892 ], 00:37:40.892 "driver_specific": { 00:37:40.892 "nvme": [ 00:37:40.892 { 00:37:40.892 "trid": { 00:37:40.892 "trtype": "TCP", 00:37:40.892 "adrfam": "IPv4", 00:37:40.892 "traddr": "10.0.0.2", 00:37:40.892 "trsvcid": "4420", 00:37:40.892 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:40.892 }, 00:37:40.892 "ctrlr_data": { 00:37:40.892 "cntlid": 1, 00:37:40.892 "vendor_id": "0x8086", 00:37:40.892 "model_number": "SPDK bdev Controller", 00:37:40.892 "serial_number": "SPDK0", 00:37:40.892 "firmware_revision": "25.01", 00:37:40.892 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:40.892 "oacs": { 00:37:40.892 "security": 0, 00:37:40.892 "format": 0, 00:37:40.892 "firmware": 0, 00:37:40.892 "ns_manage": 0 00:37:40.892 }, 00:37:40.892 "multi_ctrlr": true, 00:37:40.892 "ana_reporting": false 00:37:40.892 }, 00:37:40.892 "vs": { 00:37:40.892 "nvme_version": "1.3" 00:37:40.892 }, 00:37:40.892 "ns_data": { 00:37:40.892 "id": 1, 00:37:40.892 "can_share": true 00:37:40.892 } 00:37:40.892 } 00:37:40.892 ], 00:37:40.892 "mp_policy": "active_passive" 00:37:40.892 } 00:37:40.892 } 00:37:40.892 ] 00:37:40.892 18:58:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=918405 00:37:40.892 18:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:40.892 18:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:40.892 Running I/O for 10 seconds... 00:37:42.271 Latency(us) 00:37:42.271 [2024-11-17T17:58:28.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:42.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:42.271 Nvme0n1 : 1.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:37:42.271 [2024-11-17T17:58:28.847Z] =================================================================================================================== 00:37:42.271 [2024-11-17T17:58:28.847Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:37:42.271 00:37:42.836 18:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c6bc26e3-963e-47af-8524-2bcbf2e27031 00:37:43.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:43.093 Nvme0n1 : 2.00 15049.50 58.79 0.00 0.00 0.00 0.00 0.00 00:37:43.093 [2024-11-17T17:58:29.669Z] =================================================================================================================== 00:37:43.093 [2024-11-17T17:58:29.669Z] Total : 15049.50 58.79 0.00 0.00 0.00 0.00 0.00 00:37:43.093 00:37:43.093 true 00:37:43.093 18:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u c6bc26e3-963e-47af-8524-2bcbf2e27031 00:37:43.093 18:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:43.353 18:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:43.353 18:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:43.353 18:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 918405 00:37:43.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:43.923 Nvme0n1 : 3.00 15070.67 58.87 0.00 0.00 0.00 0.00 0.00 00:37:43.923 [2024-11-17T17:58:30.499Z] =================================================================================================================== 00:37:43.923 [2024-11-17T17:58:30.499Z] Total : 15070.67 58.87 0.00 0.00 0.00 0.00 0.00 00:37:43.923 00:37:45.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:45.300 Nvme0n1 : 4.00 15208.25 59.41 0.00 0.00 0.00 0.00 0.00 00:37:45.300 [2024-11-17T17:58:31.876Z] =================================================================================================================== 00:37:45.300 [2024-11-17T17:58:31.876Z] Total : 15208.25 59.41 0.00 0.00 0.00 0.00 0.00 00:37:45.300 00:37:45.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:45.871 Nvme0n1 : 5.00 15278.20 59.68 0.00 0.00 0.00 0.00 0.00 00:37:45.871 [2024-11-17T17:58:32.447Z] =================================================================================================================== 00:37:45.871 [2024-11-17T17:58:32.447Z] Total : 15278.20 59.68 0.00 0.00 0.00 0.00 0.00 00:37:45.871 00:37:47.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:37:47.254 Nvme0n1 : 6.00 15324.67 59.86 0.00 0.00 0.00 0.00 0.00 00:37:47.254 [2024-11-17T17:58:33.830Z] =================================================================================================================== 00:37:47.254 [2024-11-17T17:58:33.830Z] Total : 15324.67 59.86 0.00 0.00 0.00 0.00 0.00 00:37:47.254 00:37:48.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:48.191 Nvme0n1 : 7.00 15348.86 59.96 0.00 0.00 0.00 0.00 0.00 00:37:48.191 [2024-11-17T17:58:34.767Z] =================================================================================================================== 00:37:48.191 [2024-11-17T17:58:34.767Z] Total : 15348.86 59.96 0.00 0.00 0.00 0.00 0.00 00:37:48.191 00:37:49.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:49.127 Nvme0n1 : 8.00 15398.75 60.15 0.00 0.00 0.00 0.00 0.00 00:37:49.127 [2024-11-17T17:58:35.703Z] =================================================================================================================== 00:37:49.127 [2024-11-17T17:58:35.703Z] Total : 15398.75 60.15 0.00 0.00 0.00 0.00 0.00 00:37:49.127 00:37:50.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:50.061 Nvme0n1 : 9.00 15437.56 60.30 0.00 0.00 0.00 0.00 0.00 00:37:50.061 [2024-11-17T17:58:36.637Z] =================================================================================================================== 00:37:50.061 [2024-11-17T17:58:36.637Z] Total : 15437.56 60.30 0.00 0.00 0.00 0.00 0.00 00:37:50.061 00:37:50.999 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:50.999 Nvme0n1 : 10.00 15468.60 60.42 0.00 0.00 0.00 0.00 0.00 00:37:50.999 [2024-11-17T17:58:37.575Z] =================================================================================================================== 00:37:50.999 [2024-11-17T17:58:37.575Z] Total : 15468.60 60.42 0.00 0.00 0.00 0.00 0.00 00:37:50.999 00:37:50.999 
00:37:50.999 Latency(us) 00:37:50.999 [2024-11-17T17:58:37.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:50.999 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:51.000 Nvme0n1 : 10.01 15473.61 60.44 0.00 0.00 8267.65 6359.42 19029.71 00:37:51.000 [2024-11-17T17:58:37.576Z] =================================================================================================================== 00:37:51.000 [2024-11-17T17:58:37.576Z] Total : 15473.61 60.44 0.00 0.00 8267.65 6359.42 19029.71 00:37:51.000 { 00:37:51.000 "results": [ 00:37:51.000 { 00:37:51.000 "job": "Nvme0n1", 00:37:51.000 "core_mask": "0x2", 00:37:51.000 "workload": "randwrite", 00:37:51.000 "status": "finished", 00:37:51.000 "queue_depth": 128, 00:37:51.000 "io_size": 4096, 00:37:51.000 "runtime": 10.005037, 00:37:51.000 "iops": 15473.605944685662, 00:37:51.000 "mibps": 60.443773221428366, 00:37:51.000 "io_failed": 0, 00:37:51.000 "io_timeout": 0, 00:37:51.000 "avg_latency_us": 8267.646188453624, 00:37:51.000 "min_latency_us": 6359.419259259259, 00:37:51.000 "max_latency_us": 19029.712592592594 00:37:51.000 } 00:37:51.000 ], 00:37:51.000 "core_count": 1 00:37:51.000 } 00:37:51.000 18:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 918386 00:37:51.000 18:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 918386 ']' 00:37:51.000 18:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 918386 00:37:51.000 18:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:37:51.000 18:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:51.000 18:58:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 918386 00:37:51.000 18:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:51.000 18:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:51.000 18:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 918386' 00:37:51.000 killing process with pid 918386 00:37:51.000 18:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 918386 00:37:51.000 Received shutdown signal, test time was about 10.000000 seconds 00:37:51.000 00:37:51.000 Latency(us) 00:37:51.000 [2024-11-17T17:58:37.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:51.000 [2024-11-17T17:58:37.576Z] =================================================================================================================== 00:37:51.000 [2024-11-17T17:58:37.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:51.000 18:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 918386 00:37:51.270 18:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:51.528 18:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:51.787 18:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6bc26e3-963e-47af-8524-2bcbf2e27031 00:37:51.787 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:52.044 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 915797 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 915797 00:37:52.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 915797 Killed "${NVMF_APP[@]}" "$@" 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=919720 00:37:52.045 18:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 919720 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 919720 ']' 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:52.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:52.045 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:52.303 [2024-11-17 18:58:38.623188] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:52.303 [2024-11-17 18:58:38.624322] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:37:52.303 [2024-11-17 18:58:38.624389] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:52.303 [2024-11-17 18:58:38.700353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:52.303 [2024-11-17 18:58:38.747382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:52.303 [2024-11-17 18:58:38.747456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:52.303 [2024-11-17 18:58:38.747470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:52.303 [2024-11-17 18:58:38.747496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:52.303 [2024-11-17 18:58:38.747506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:52.303 [2024-11-17 18:58:38.748114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:52.303 [2024-11-17 18:58:38.841474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:52.303 [2024-11-17 18:58:38.841832] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:52.303 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:52.303 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:37:52.303 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:52.303 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:52.303 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:52.562 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:52.562 18:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:52.822 [2024-11-17 18:58:39.142837] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:37:52.822 [2024-11-17 18:58:39.143001] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:37:52.822 [2024-11-17 18:58:39.143050] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:37:52.822 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:37:52.822 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0bc036b0-21f6-4569-a3f3-a5820273edcc 00:37:52.822 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=0bc036b0-21f6-4569-a3f3-a5820273edcc 00:37:52.822 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:52.822 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:37:52.822 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:52.822 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:52.822 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:53.080 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0bc036b0-21f6-4569-a3f3-a5820273edcc -t 2000 00:37:53.339 [ 00:37:53.339 { 00:37:53.339 "name": "0bc036b0-21f6-4569-a3f3-a5820273edcc", 00:37:53.339 "aliases": [ 00:37:53.339 "lvs/lvol" 00:37:53.339 ], 00:37:53.339 "product_name": "Logical Volume", 00:37:53.339 "block_size": 4096, 00:37:53.339 "num_blocks": 38912, 00:37:53.339 "uuid": "0bc036b0-21f6-4569-a3f3-a5820273edcc", 00:37:53.339 "assigned_rate_limits": { 00:37:53.339 "rw_ios_per_sec": 0, 00:37:53.339 "rw_mbytes_per_sec": 0, 00:37:53.339 "r_mbytes_per_sec": 0, 00:37:53.339 "w_mbytes_per_sec": 0 00:37:53.339 }, 00:37:53.339 "claimed": false, 00:37:53.339 "zoned": false, 00:37:53.339 "supported_io_types": { 00:37:53.339 "read": true, 00:37:53.339 "write": true, 00:37:53.339 "unmap": true, 00:37:53.339 "flush": false, 00:37:53.339 "reset": true, 00:37:53.339 "nvme_admin": false, 00:37:53.339 "nvme_io": false, 00:37:53.339 "nvme_io_md": false, 00:37:53.339 "write_zeroes": true, 
00:37:53.339 "zcopy": false, 00:37:53.339 "get_zone_info": false, 00:37:53.339 "zone_management": false, 00:37:53.339 "zone_append": false, 00:37:53.339 "compare": false, 00:37:53.339 "compare_and_write": false, 00:37:53.339 "abort": false, 00:37:53.340 "seek_hole": true, 00:37:53.340 "seek_data": true, 00:37:53.340 "copy": false, 00:37:53.340 "nvme_iov_md": false 00:37:53.340 }, 00:37:53.340 "driver_specific": { 00:37:53.340 "lvol": { 00:37:53.340 "lvol_store_uuid": "c6bc26e3-963e-47af-8524-2bcbf2e27031", 00:37:53.340 "base_bdev": "aio_bdev", 00:37:53.340 "thin_provision": false, 00:37:53.340 "num_allocated_clusters": 38, 00:37:53.340 "snapshot": false, 00:37:53.340 "clone": false, 00:37:53.340 "esnap_clone": false 00:37:53.340 } 00:37:53.340 } 00:37:53.340 } 00:37:53.340 ] 00:37:53.340 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:37:53.340 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6bc26e3-963e-47af-8524-2bcbf2e27031 00:37:53.340 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:37:53.598 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:37:53.598 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6bc26e3-963e-47af-8524-2bcbf2e27031 00:37:53.598 18:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:37:53.857 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:37:53.857 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:54.117 [2024-11-17 18:58:40.544656] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:54.117 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6bc26e3-963e-47af-8524-2bcbf2e27031 00:37:54.117 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:37:54.117 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6bc26e3-963e-47af-8524-2bcbf2e27031 00:37:54.117 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:54.117 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:54.117 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:54.117 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:54.117 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:54.117 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:54.117 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:54.117 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:54.117 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6bc26e3-963e-47af-8524-2bcbf2e27031 00:37:54.377 request: 00:37:54.377 { 00:37:54.377 "uuid": "c6bc26e3-963e-47af-8524-2bcbf2e27031", 00:37:54.377 "method": "bdev_lvol_get_lvstores", 00:37:54.377 "req_id": 1 00:37:54.377 } 00:37:54.377 Got JSON-RPC error response 00:37:54.377 response: 00:37:54.377 { 00:37:54.377 "code": -19, 00:37:54.377 "message": "No such device" 00:37:54.377 } 00:37:54.377 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:37:54.377 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:54.377 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:54.377 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:54.377 18:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:54.637 aio_bdev 00:37:54.637 18:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0bc036b0-21f6-4569-a3f3-a5820273edcc 00:37:54.637 18:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0bc036b0-21f6-4569-a3f3-a5820273edcc 00:37:54.637 18:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:54.637 18:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:37:54.637 18:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:54.637 18:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:54.637 18:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:54.895 18:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0bc036b0-21f6-4569-a3f3-a5820273edcc -t 2000 00:37:55.155 [ 00:37:55.155 { 00:37:55.155 "name": "0bc036b0-21f6-4569-a3f3-a5820273edcc", 00:37:55.155 "aliases": [ 00:37:55.155 "lvs/lvol" 00:37:55.155 ], 00:37:55.155 "product_name": "Logical Volume", 00:37:55.155 "block_size": 4096, 00:37:55.155 "num_blocks": 38912, 00:37:55.155 "uuid": "0bc036b0-21f6-4569-a3f3-a5820273edcc", 00:37:55.155 "assigned_rate_limits": { 00:37:55.155 "rw_ios_per_sec": 0, 00:37:55.155 "rw_mbytes_per_sec": 0, 00:37:55.155 
"r_mbytes_per_sec": 0, 00:37:55.155 "w_mbytes_per_sec": 0 00:37:55.155 }, 00:37:55.155 "claimed": false, 00:37:55.155 "zoned": false, 00:37:55.155 "supported_io_types": { 00:37:55.155 "read": true, 00:37:55.155 "write": true, 00:37:55.155 "unmap": true, 00:37:55.155 "flush": false, 00:37:55.155 "reset": true, 00:37:55.155 "nvme_admin": false, 00:37:55.155 "nvme_io": false, 00:37:55.155 "nvme_io_md": false, 00:37:55.155 "write_zeroes": true, 00:37:55.155 "zcopy": false, 00:37:55.155 "get_zone_info": false, 00:37:55.155 "zone_management": false, 00:37:55.155 "zone_append": false, 00:37:55.155 "compare": false, 00:37:55.155 "compare_and_write": false, 00:37:55.155 "abort": false, 00:37:55.155 "seek_hole": true, 00:37:55.155 "seek_data": true, 00:37:55.155 "copy": false, 00:37:55.155 "nvme_iov_md": false 00:37:55.155 }, 00:37:55.155 "driver_specific": { 00:37:55.155 "lvol": { 00:37:55.155 "lvol_store_uuid": "c6bc26e3-963e-47af-8524-2bcbf2e27031", 00:37:55.155 "base_bdev": "aio_bdev", 00:37:55.155 "thin_provision": false, 00:37:55.155 "num_allocated_clusters": 38, 00:37:55.155 "snapshot": false, 00:37:55.155 "clone": false, 00:37:55.155 "esnap_clone": false 00:37:55.155 } 00:37:55.155 } 00:37:55.155 } 00:37:55.155 ] 00:37:55.155 18:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:37:55.156 18:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6bc26e3-963e-47af-8524-2bcbf2e27031 00:37:55.156 18:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:55.415 18:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:55.415 18:58:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c6bc26e3-963e-47af-8524-2bcbf2e27031 00:37:55.415 18:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:55.673 18:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:55.673 18:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0bc036b0-21f6-4569-a3f3-a5820273edcc 00:37:55.931 18:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c6bc26e3-963e-47af-8524-2bcbf2e27031 00:37:56.497 18:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:56.497 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:56.757 00:37:56.757 real 0m19.496s 00:37:56.757 user 0m36.607s 00:37:56.757 sys 0m4.555s 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:56.757 ************************************ 00:37:56.757 END TEST lvs_grow_dirty 00:37:56.757 ************************************ 
00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:37:56.757 nvmf_trace.0 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:56.757 18:58:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:56.757 rmmod nvme_tcp 00:37:56.757 rmmod nvme_fabrics 00:37:56.757 rmmod nvme_keyring 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 919720 ']' 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 919720 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 919720 ']' 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 919720 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 919720 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:56.757 18:58:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 919720' 00:37:56.757 killing process with pid 919720 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 919720 00:37:56.757 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 919720 00:37:57.016 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:57.016 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:57.016 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:57.016 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:37:57.016 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:37:57.016 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:57.016 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:37:57.016 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:57.016 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:57.016 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:57.016 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:57.016 18:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:58.923 18:58:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:58.923 00:37:58.923 real 0m42.667s 00:37:58.923 user 0m55.623s 00:37:58.923 sys 0m8.401s 00:37:58.923 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:58.923 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:58.923 ************************************ 00:37:58.923 END TEST nvmf_lvs_grow 00:37:58.923 ************************************ 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:59.183 ************************************ 00:37:59.183 START TEST nvmf_bdev_io_wait 00:37:59.183 ************************************ 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:37:59.183 * Looking for test storage... 
00:37:59.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:59.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.183 --rc genhtml_branch_coverage=1 00:37:59.183 --rc genhtml_function_coverage=1 00:37:59.183 --rc genhtml_legend=1 00:37:59.183 --rc geninfo_all_blocks=1 00:37:59.183 --rc geninfo_unexecuted_blocks=1 00:37:59.183 00:37:59.183 ' 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:59.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.183 --rc genhtml_branch_coverage=1 00:37:59.183 --rc genhtml_function_coverage=1 00:37:59.183 --rc genhtml_legend=1 00:37:59.183 --rc geninfo_all_blocks=1 00:37:59.183 --rc geninfo_unexecuted_blocks=1 00:37:59.183 00:37:59.183 ' 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:59.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.183 --rc genhtml_branch_coverage=1 00:37:59.183 --rc genhtml_function_coverage=1 00:37:59.183 --rc genhtml_legend=1 00:37:59.183 --rc geninfo_all_blocks=1 00:37:59.183 --rc geninfo_unexecuted_blocks=1 00:37:59.183 00:37:59.183 ' 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:59.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.183 --rc genhtml_branch_coverage=1 00:37:59.183 --rc genhtml_function_coverage=1 
00:37:59.183 --rc genhtml_legend=1 00:37:59.183 --rc geninfo_all_blocks=1 00:37:59.183 --rc geninfo_unexecuted_blocks=1 00:37:59.183 00:37:59.183 ' 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:59.183 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:59.184 18:58:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.184 18:58:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:59.184 18:58:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:59.184 18:58:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:37:59.184 18:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:01.721 18:58:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:01.721 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:01.721 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:01.721 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:01.721 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:01.721 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:01.722 18:58:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:01.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:01.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:38:01.722 00:38:01.722 --- 10.0.0.2 ping statistics --- 00:38:01.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.722 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:38:01.722 18:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:01.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:01.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:38:01.722 00:38:01.722 --- 10.0.0.1 ping statistics --- 00:38:01.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.722 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:01.722 18:58:48 
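The nvmftestinit steps above move the target interface (cvl_0_0, 10.0.0.2/24) into the cvl_0_0_ns_spdk namespace and leave the initiator side (cvl_0_1, 10.0.0.1/24) in the default namespace, then ping both directions. A minimal sketch checking the addressing plan those pings depend on (interface names and addresses taken from the log; no system state is touched):

```python
import ipaddress

# Addresses assigned by the harness above
initiator = ipaddress.ip_interface("10.0.0.1/24")  # cvl_0_1, default netns
target = ipaddress.ip_interface("10.0.0.2/24")     # cvl_0_0, inside cvl_0_0_ns_spdk

# Both ends must share the same /24 for the ping checks to succeed
assert initiator.network == target.network
print(target.ip, "reachable from", initiator.ip, "on", initiator.network)
```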
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=922256 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 922256 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 922256 ']' 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:01.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:01.722 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:01.722 [2024-11-17 18:58:48.087353] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:01.722 [2024-11-17 18:58:48.088446] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:01.722 [2024-11-17 18:58:48.088494] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:01.722 [2024-11-17 18:58:48.163301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:01.722 [2024-11-17 18:58:48.208343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:01.722 [2024-11-17 18:58:48.208402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:01.722 [2024-11-17 18:58:48.208425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:01.722 [2024-11-17 18:58:48.208436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:01.722 [2024-11-17 18:58:48.208446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:01.722 [2024-11-17 18:58:48.209849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:01.722 [2024-11-17 18:58:48.209917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:01.722 [2024-11-17 18:58:48.209980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:01.722 [2024-11-17 18:58:48.209983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:01.722 [2024-11-17 18:58:48.210442] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.982 18:58:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:01.982 [2024-11-17 18:58:48.400422] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:01.982 [2024-11-17 18:58:48.400604] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:01.982 [2024-11-17 18:58:48.401401] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:01.982 [2024-11-17 18:58:48.402163] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:01.982 [2024-11-17 18:58:48.410647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:01.982 Malloc0 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.982 18:58:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.982 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:01.983 [2024-11-17 18:58:48.470866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=922390 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=922391 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:01.983 18:58:48 
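The rpc_cmd sequence above (nvmf_create_transport -t tcp -o -u 8192, bdev_malloc_create 64 512 -b Malloc0, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) is issued as JSON-RPC over /var/tmp/spdk.sock. A hedged sketch of what those request bodies look like — the parameter spellings follow SPDK's rpc.py conventions but are illustrative here, not taken from this log:

```python
import json

def rpc(method, params, rid):
    # Plain JSON-RPC 2.0 envelope as used by the SPDK application socket
    return {"jsonrpc": "2.0", "id": rid, "method": method, "params": params}

requests = [
    rpc("nvmf_create_transport", {"trtype": "tcp", "io_unit_size": 8192}, 1),
    # 64 MiB malloc bdev with 512-byte blocks -> 131072 blocks
    rpc("bdev_malloc_create",
        {"num_blocks": 64 * 1024 * 1024 // 512, "block_size": 512,
         "name": "Malloc0"}, 2),
    rpc("nvmf_create_subsystem",
        {"nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": True,
         "serial_number": "SPDK00000000000001"}, 3),
    rpc("nvmf_subsystem_add_ns",
        {"nqn": "nqn.2016-06.io.spdk:cnode1",
         "namespace": {"bdev_name": "Malloc0"}}, 4),
    rpc("nvmf_subsystem_add_listener",
        {"nqn": "nqn.2016-06.io.spdk:cnode1",
         "listen_address": {"trtype": "tcp", "traddr": "10.0.0.2",
                            "trsvcid": "4420"}}, 5),
]

payload = "\n".join(json.dumps(r) for r in requests)
```

Each request mirrors one rpc_cmd line; the last one produces the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice seen in the log.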
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=922394 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:01.983 { 00:38:01.983 "params": { 00:38:01.983 "name": "Nvme$subsystem", 00:38:01.983 "trtype": "$TEST_TRANSPORT", 00:38:01.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:01.983 "adrfam": "ipv4", 00:38:01.983 "trsvcid": "$NVMF_PORT", 00:38:01.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:01.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:01.983 "hdgst": ${hdgst:-false}, 00:38:01.983 "ddgst": ${ddgst:-false} 00:38:01.983 }, 00:38:01.983 "method": "bdev_nvme_attach_controller" 00:38:01.983 } 00:38:01.983 EOF 00:38:01.983 )") 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:01.983 18:58:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=922396 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:01.983 { 00:38:01.983 "params": { 00:38:01.983 "name": "Nvme$subsystem", 00:38:01.983 "trtype": "$TEST_TRANSPORT", 00:38:01.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:01.983 "adrfam": "ipv4", 00:38:01.983 "trsvcid": "$NVMF_PORT", 00:38:01.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:01.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:01.983 "hdgst": ${hdgst:-false}, 00:38:01.983 "ddgst": ${ddgst:-false} 00:38:01.983 }, 00:38:01.983 "method": "bdev_nvme_attach_controller" 00:38:01.983 } 00:38:01.983 EOF 00:38:01.983 )") 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:01.983 18:58:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:01.983 { 00:38:01.983 "params": { 00:38:01.983 "name": "Nvme$subsystem", 00:38:01.983 "trtype": "$TEST_TRANSPORT", 00:38:01.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:01.983 "adrfam": "ipv4", 00:38:01.983 "trsvcid": "$NVMF_PORT", 00:38:01.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:01.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:01.983 "hdgst": ${hdgst:-false}, 00:38:01.983 "ddgst": ${ddgst:-false} 00:38:01.983 }, 00:38:01.983 "method": "bdev_nvme_attach_controller" 00:38:01.983 } 00:38:01.983 EOF 00:38:01.983 )") 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:01.983 { 00:38:01.983 "params": { 00:38:01.983 "name": "Nvme$subsystem", 00:38:01.983 "trtype": "$TEST_TRANSPORT", 00:38:01.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:01.983 "adrfam": "ipv4", 00:38:01.983 "trsvcid": "$NVMF_PORT", 00:38:01.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:01.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:01.983 "hdgst": ${hdgst:-false}, 00:38:01.983 "ddgst": ${ddgst:-false} 00:38:01.983 }, 00:38:01.983 "method": "bdev_nvme_attach_controller" 00:38:01.983 } 00:38:01.983 EOF 00:38:01.983 )") 00:38:01.983 
18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 922390 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:01.983 "params": { 00:38:01.983 "name": "Nvme1", 00:38:01.983 "trtype": "tcp", 00:38:01.983 "traddr": "10.0.0.2", 00:38:01.983 "adrfam": "ipv4", 00:38:01.983 "trsvcid": "4420", 00:38:01.983 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:01.983 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:01.983 "hdgst": false, 00:38:01.983 "ddgst": false 00:38:01.983 }, 00:38:01.983 "method": "bdev_nvme_attach_controller" 00:38:01.983 }' 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:01.983 "params": { 00:38:01.983 "name": "Nvme1", 00:38:01.983 "trtype": "tcp", 00:38:01.983 "traddr": "10.0.0.2", 00:38:01.983 "adrfam": "ipv4", 00:38:01.983 "trsvcid": "4420", 
00:38:01.983 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:01.983 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:01.983 "hdgst": false, 00:38:01.983 "ddgst": false 00:38:01.983 }, 00:38:01.983 "method": "bdev_nvme_attach_controller" 00:38:01.983 }' 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:01.983 "params": { 00:38:01.983 "name": "Nvme1", 00:38:01.983 "trtype": "tcp", 00:38:01.983 "traddr": "10.0.0.2", 00:38:01.983 "adrfam": "ipv4", 00:38:01.983 "trsvcid": "4420", 00:38:01.983 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:01.983 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:01.983 "hdgst": false, 00:38:01.983 "ddgst": false 00:38:01.983 }, 00:38:01.983 "method": "bdev_nvme_attach_controller" 00:38:01.983 }' 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:01.983 18:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:01.983 "params": { 00:38:01.983 "name": "Nvme1", 00:38:01.983 "trtype": "tcp", 00:38:01.983 "traddr": "10.0.0.2", 00:38:01.983 "adrfam": "ipv4", 00:38:01.983 "trsvcid": "4420", 00:38:01.983 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:01.983 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:01.983 "hdgst": false, 00:38:01.983 "ddgst": false 00:38:01.983 }, 00:38:01.983 "method": "bdev_nvme_attach_controller" 00:38:01.983 }' 00:38:01.983 [2024-11-17 18:58:48.523576] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:01.983 [2024-11-17 18:58:48.523576] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:01.983 [2024-11-17 18:58:48.523576] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:38:01.983 [2024-11-17 18:58:48.523621] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:01.984 [2024-11-17 18:58:48.523703] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:01.984 [2024-11-17 18:58:48.523706] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:01.984 [2024-11-17 18:58:48.523708] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:01.984 [2024-11-17 18:58:48.523736] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:02.242 [2024-11-17 18:58:48.706994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.242 [2024-11-17 18:58:48.748693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:02.242 [2024-11-17 18:58:48.804625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.500 [2024-11-17 18:58:48.846637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:02.500 [2024-11-17 18:58:48.902502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.500 [2024-11-17 18:58:48.947363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:02.500 [2024-11-17 18:58:48.978173] app.c: 919:spdk_app_start: *NOTICE*: Total cores 
available: 1 00:38:02.500 [2024-11-17 18:58:49.016399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:02.768 Running I/O for 1 seconds... 00:38:02.768 Running I/O for 1 seconds... 00:38:02.768 Running I/O for 1 seconds... 00:38:02.768 Running I/O for 1 seconds... 00:38:03.707 188240.00 IOPS, 735.31 MiB/s 00:38:03.707 Latency(us) 00:38:03.707 [2024-11-17T17:58:50.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.707 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:03.707 Nvme1n1 : 1.00 187884.07 733.92 0.00 0.00 677.66 300.37 1868.99 00:38:03.707 [2024-11-17T17:58:50.283Z] =================================================================================================================== 00:38:03.707 [2024-11-17T17:58:50.283Z] Total : 187884.07 733.92 0.00 0.00 677.66 300.37 1868.99 00:38:03.707 6586.00 IOPS, 25.73 MiB/s 00:38:03.707 Latency(us) 00:38:03.707 [2024-11-17T17:58:50.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.707 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:03.707 Nvme1n1 : 1.02 6577.68 25.69 0.00 0.00 19329.16 4587.52 32428.18 00:38:03.707 [2024-11-17T17:58:50.284Z] =================================================================================================================== 00:38:03.708 [2024-11-17T17:58:50.284Z] Total : 6577.68 25.69 0.00 0.00 19329.16 4587.52 32428.18 00:38:03.708 9182.00 IOPS, 35.87 MiB/s 00:38:03.708 Latency(us) 00:38:03.708 [2024-11-17T17:58:50.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.708 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:03.708 Nvme1n1 : 1.01 9224.34 36.03 0.00 0.00 13806.78 5461.33 19806.44 00:38:03.708 [2024-11-17T17:58:50.284Z] =================================================================================================================== 00:38:03.708 
[2024-11-17T17:58:50.284Z] Total : 9224.34 36.03 0.00 0.00 13806.78 5461.33 19806.44 00:38:03.708 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 922391 00:38:03.967 6789.00 IOPS, 26.52 MiB/s 00:38:03.967 Latency(us) 00:38:03.967 [2024-11-17T17:58:50.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.967 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:03.967 Nvme1n1 : 1.01 6919.60 27.03 0.00 0.00 18448.37 3835.07 42137.22 00:38:03.967 [2024-11-17T17:58:50.543Z] =================================================================================================================== 00:38:03.967 [2024-11-17T17:58:50.543Z] Total : 6919.60 27.03 0.00 0.00 18448.37 3835.07 42137.22 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 922394 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 922396 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- 
# nvmfcleanup 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:03.967 rmmod nvme_tcp 00:38:03.967 rmmod nvme_fabrics 00:38:03.967 rmmod nvme_keyring 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 922256 ']' 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 922256 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 922256 ']' 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 922256 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:03.967 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 922256 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 922256' 00:38:04.260 killing process with pid 922256 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 922256 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 922256 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:04.260 18:58:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:04.260 18:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:06.813 00:38:06.813 real 0m7.249s 00:38:06.813 user 0m14.281s 00:38:06.813 sys 0m3.880s 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:06.813 ************************************ 00:38:06.813 END TEST nvmf_bdev_io_wait 00:38:06.813 ************************************ 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:06.813 ************************************ 00:38:06.813 START TEST nvmf_queue_depth 00:38:06.813 ************************************ 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 
00:38:06.813 * Looking for test storage... 00:38:06.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:06.813 18:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:06.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.813 --rc genhtml_branch_coverage=1 00:38:06.813 --rc genhtml_function_coverage=1 00:38:06.813 --rc genhtml_legend=1 00:38:06.813 --rc geninfo_all_blocks=1 00:38:06.813 --rc geninfo_unexecuted_blocks=1 00:38:06.813 00:38:06.813 ' 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:06.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.813 --rc genhtml_branch_coverage=1 00:38:06.813 --rc genhtml_function_coverage=1 00:38:06.813 --rc genhtml_legend=1 00:38:06.813 --rc geninfo_all_blocks=1 00:38:06.813 --rc geninfo_unexecuted_blocks=1 00:38:06.813 00:38:06.813 ' 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:06.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.813 --rc genhtml_branch_coverage=1 00:38:06.813 --rc genhtml_function_coverage=1 00:38:06.813 --rc genhtml_legend=1 00:38:06.813 --rc geninfo_all_blocks=1 00:38:06.813 --rc geninfo_unexecuted_blocks=1 00:38:06.813 00:38:06.813 ' 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:06.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.813 --rc genhtml_branch_coverage=1 00:38:06.813 --rc genhtml_function_coverage=1 00:38:06.813 
--rc genhtml_legend=1 00:38:06.813 --rc geninfo_all_blocks=1 00:38:06.813 --rc geninfo_unexecuted_blocks=1 00:38:06.813 00:38:06.813 ' 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:06.813 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.814 18:58:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:06.814 18:58:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:06.814 18:58:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:06.814 18:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:08.747 
18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:08.747 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:08.748 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:08.748 18:58:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:08.748 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:08.748 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:08.748 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:08.748 18:58:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:08.748 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:09.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:09.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:38:09.009 00:38:09.009 --- 10.0.0.2 ping statistics --- 00:38:09.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.009 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:09.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:09.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:38:09.009 00:38:09.009 --- 10.0.0.1 ping statistics --- 00:38:09.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.009 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:09.009 18:58:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=924615 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 924615 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 924615 ']' 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:09.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:09.009 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.009 [2024-11-17 18:58:55.480015] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:09.009 [2024-11-17 18:58:55.481175] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:09.009 [2024-11-17 18:58:55.481227] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:09.009 [2024-11-17 18:58:55.559450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.268 [2024-11-17 18:58:55.607186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:09.268 [2024-11-17 18:58:55.607260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:09.268 [2024-11-17 18:58:55.607298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:09.268 [2024-11-17 18:58:55.607310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:09.268 [2024-11-17 18:58:55.607320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:09.268 [2024-11-17 18:58:55.607967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:09.268 [2024-11-17 18:58:55.701032] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:09.268 [2024-11-17 18:58:55.701323] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.268 [2024-11-17 18:58:55.752583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.268 Malloc0 00:38:09.268 18:58:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.268 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.269 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.269 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:09.269 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.269 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.269 [2024-11-17 18:58:55.820683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.269 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.269 
18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=924638 00:38:09.269 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:09.269 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:09.269 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 924638 /var/tmp/bdevperf.sock 00:38:09.269 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 924638 ']' 00:38:09.269 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:09.269 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:09.269 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:09.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:09.269 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:09.269 18:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.527 [2024-11-17 18:58:55.867404] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:38:09.527 [2024-11-17 18:58:55.867465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924638 ] 00:38:09.527 [2024-11-17 18:58:55.933780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.527 [2024-11-17 18:58:55.978829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:09.786 18:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:09.786 18:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:09.786 18:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:09.786 18:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.786 18:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:09.786 NVMe0n1 00:38:09.786 18:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.786 18:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:09.786 Running I/O for 10 seconds... 
00:38:12.098 8214.00 IOPS, 32.09 MiB/s [2024-11-17T17:58:59.611Z] 8697.00 IOPS, 33.97 MiB/s [2024-11-17T17:59:00.545Z] 8537.67 IOPS, 33.35 MiB/s [2024-11-17T17:59:01.483Z] 8535.00 IOPS, 33.34 MiB/s [2024-11-17T17:59:02.423Z] 8595.20 IOPS, 33.58 MiB/s [2024-11-17T17:59:03.360Z] 8561.33 IOPS, 33.44 MiB/s [2024-11-17T17:59:04.738Z] 8631.57 IOPS, 33.72 MiB/s [2024-11-17T17:59:05.306Z] 8610.25 IOPS, 33.63 MiB/s [2024-11-17T17:59:06.680Z] 8649.44 IOPS, 33.79 MiB/s [2024-11-17T17:59:06.680Z] 8649.10 IOPS, 33.79 MiB/s 00:38:20.104 Latency(us) 00:38:20.104 [2024-11-17T17:59:06.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:20.104 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:20.104 Verification LBA range: start 0x0 length 0x4000 00:38:20.104 NVMe0n1 : 10.08 8679.34 33.90 0.00 0.00 117386.49 20680.25 71070.15 00:38:20.104 [2024-11-17T17:59:06.680Z] =================================================================================================================== 00:38:20.104 [2024-11-17T17:59:06.680Z] Total : 8679.34 33.90 0.00 0.00 117386.49 20680.25 71070.15 00:38:20.104 { 00:38:20.104 "results": [ 00:38:20.104 { 00:38:20.104 "job": "NVMe0n1", 00:38:20.104 "core_mask": "0x1", 00:38:20.104 "workload": "verify", 00:38:20.104 "status": "finished", 00:38:20.104 "verify_range": { 00:38:20.104 "start": 0, 00:38:20.104 "length": 16384 00:38:20.104 }, 00:38:20.104 "queue_depth": 1024, 00:38:20.104 "io_size": 4096, 00:38:20.104 "runtime": 10.08164, 00:38:20.104 "iops": 8679.3418531112, 00:38:20.104 "mibps": 33.903679113715626, 00:38:20.104 "io_failed": 0, 00:38:20.104 "io_timeout": 0, 00:38:20.104 "avg_latency_us": 117386.48971159178, 00:38:20.104 "min_latency_us": 20680.248888888887, 00:38:20.104 "max_latency_us": 71070.15111111112 00:38:20.104 } 00:38:20.104 ], 00:38:20.104 "core_count": 1 00:38:20.104 } 00:38:20.104 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 924638 00:38:20.104 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 924638 ']' 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 924638 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 924638 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 924638' 00:38:20.105 killing process with pid 924638 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 924638 00:38:20.105 Received shutdown signal, test time was about 10.000000 seconds 00:38:20.105 00:38:20.105 Latency(us) 00:38:20.105 [2024-11-17T17:59:06.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:20.105 [2024-11-17T17:59:06.681Z] =================================================================================================================== 00:38:20.105 [2024-11-17T17:59:06.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 924638 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth 
-- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:20.105 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:20.105 rmmod nvme_tcp 00:38:20.105 rmmod nvme_fabrics 00:38:20.105 rmmod nvme_keyring 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 924615 ']' 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 924615 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 924615 ']' 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 924615 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@959 -- # uname 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 924615 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 924615' 00:38:20.365 killing process with pid 924615 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 924615 00:38:20.365 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 924615 00:38:20.625 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:20.625 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:20.625 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:20.625 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:20.625 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:38:20.625 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:20.625 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:38:20.625 18:59:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:20.625 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:20.625 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:20.625 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:20.625 18:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.534 18:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:22.534 00:38:22.534 real 0m16.159s 00:38:22.534 user 0m22.131s 00:38:22.534 sys 0m3.412s 00:38:22.534 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:22.534 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:22.534 ************************************ 00:38:22.534 END TEST nvmf_queue_depth 00:38:22.534 ************************************ 00:38:22.534 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:22.534 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:22.534 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:22.534 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:22.534 ************************************ 00:38:22.534 START TEST 
nvmf_target_multipath 00:38:22.534 ************************************ 00:38:22.534 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:22.534 * Looking for test storage... 00:38:22.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:22.809 18:59:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:22.809 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:22.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.810 --rc genhtml_branch_coverage=1 00:38:22.810 --rc genhtml_function_coverage=1 00:38:22.810 --rc genhtml_legend=1 00:38:22.810 --rc geninfo_all_blocks=1 00:38:22.810 --rc geninfo_unexecuted_blocks=1 00:38:22.810 00:38:22.810 ' 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:22.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.810 --rc genhtml_branch_coverage=1 00:38:22.810 --rc genhtml_function_coverage=1 00:38:22.810 --rc genhtml_legend=1 00:38:22.810 --rc geninfo_all_blocks=1 00:38:22.810 --rc geninfo_unexecuted_blocks=1 00:38:22.810 00:38:22.810 ' 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:22.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.810 --rc genhtml_branch_coverage=1 00:38:22.810 --rc genhtml_function_coverage=1 00:38:22.810 --rc genhtml_legend=1 00:38:22.810 --rc geninfo_all_blocks=1 00:38:22.810 --rc geninfo_unexecuted_blocks=1 00:38:22.810 00:38:22.810 ' 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:22.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.810 --rc genhtml_branch_coverage=1 00:38:22.810 --rc genhtml_function_coverage=1 00:38:22.810 --rc genhtml_legend=1 00:38:22.810 --rc geninfo_all_blocks=1 00:38:22.810 --rc geninfo_unexecuted_blocks=1 00:38:22.810 00:38:22.810 ' 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:22.810 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:22.811 18:59:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:22.811 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:22.812 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:22.812 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:22.812 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:22.812 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:22.812 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:22.812 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:22.812 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:22.812 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.812 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:22.812 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.812 18:59:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:22.812 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:22.812 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:22.812 18:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:25.347 18:59:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:25.347 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:25.348 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:25.348 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:25.348 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:25.348 18:59:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:25.348 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:25.348 18:59:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:25.348 18:59:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:25.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:25.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:38:25.348 00:38:25.348 --- 10.0.0.2 ping statistics --- 00:38:25.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.348 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:25.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:25.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:38:25.348 00:38:25.348 --- 10.0.0.1 ping statistics --- 00:38:25.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:25.348 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:25.348 only one NIC for nvmf test 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:25.348 18:59:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:25.348 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:25.349 rmmod nvme_tcp 00:38:25.349 rmmod nvme_fabrics 00:38:25.349 rmmod nvme_keyring 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:25.349 18:59:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:25.349 18:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.250 
18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:27.250 00:38:27.250 real 0m4.555s 00:38:27.250 user 0m0.953s 00:38:27.250 sys 0m1.622s 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:27.250 ************************************ 00:38:27.250 END TEST nvmf_target_multipath 00:38:27.250 ************************************ 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:27.250 ************************************ 00:38:27.250 START TEST nvmf_zcopy 00:38:27.250 ************************************ 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:27.250 * Looking for test storage... 
00:38:27.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:27.250 18:59:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:27.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.250 --rc genhtml_branch_coverage=1 00:38:27.250 --rc genhtml_function_coverage=1 00:38:27.250 --rc genhtml_legend=1 00:38:27.250 --rc geninfo_all_blocks=1 00:38:27.250 --rc geninfo_unexecuted_blocks=1 00:38:27.250 00:38:27.250 ' 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:27.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.250 --rc genhtml_branch_coverage=1 00:38:27.250 --rc genhtml_function_coverage=1 00:38:27.250 --rc genhtml_legend=1 00:38:27.250 --rc geninfo_all_blocks=1 00:38:27.250 --rc geninfo_unexecuted_blocks=1 00:38:27.250 00:38:27.250 ' 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:27.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.250 --rc genhtml_branch_coverage=1 00:38:27.250 --rc genhtml_function_coverage=1 00:38:27.250 --rc genhtml_legend=1 00:38:27.250 --rc geninfo_all_blocks=1 00:38:27.250 --rc geninfo_unexecuted_blocks=1 00:38:27.250 00:38:27.250 ' 00:38:27.250 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:27.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.250 --rc genhtml_branch_coverage=1 00:38:27.250 --rc genhtml_function_coverage=1 00:38:27.250 --rc genhtml_legend=1 00:38:27.250 --rc geninfo_all_blocks=1 00:38:27.250 --rc geninfo_unexecuted_blocks=1 00:38:27.250 00:38:27.250 ' 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:27.251 18:59:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:27.251 18:59:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:27.251 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.511 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:27.511 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:27.511 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:27.511 18:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:29.416 
18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:29.416 18:59:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:29.416 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:29.417 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:29.417 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:29.417 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:29.417 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:29.417 18:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:29.676 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:29.676 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:29.676 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:29.676 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:29.676 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:29.676 18:59:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:29.676 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:29.676 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:29.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:29.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:38:29.676 00:38:29.676 --- 10.0.0.2 ping statistics --- 00:38:29.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:29.676 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:38:29.676 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:29.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:29.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:38:29.676 00:38:29.676 --- 10.0.0.1 ping statistics --- 00:38:29.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:29.676 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:38:29.676 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:29.676 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:38:29.676 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:29.676 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:29.676 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:29.676 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=929805 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 929805 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 929805 ']' 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:29.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:29.677 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.677 [2024-11-17 18:59:16.152381] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:29.677 [2024-11-17 18:59:16.153639] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:38:29.677 [2024-11-17 18:59:16.153724] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:29.677 [2024-11-17 18:59:16.225147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:29.935 [2024-11-17 18:59:16.268800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:29.935 [2024-11-17 18:59:16.268858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:29.935 [2024-11-17 18:59:16.268881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:29.935 [2024-11-17 18:59:16.268892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:29.935 [2024-11-17 18:59:16.268901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:29.935 [2024-11-17 18:59:16.269465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:29.935 [2024-11-17 18:59:16.347647] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:29.935 [2024-11-17 18:59:16.347986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.935 [2024-11-17 18:59:16.406146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.935 
18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.935 [2024-11-17 18:59:16.422291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.935 malloc0 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:29.935 { 00:38:29.935 "params": { 00:38:29.935 "name": "Nvme$subsystem", 00:38:29.935 "trtype": "$TEST_TRANSPORT", 00:38:29.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:29.935 "adrfam": "ipv4", 00:38:29.935 "trsvcid": "$NVMF_PORT", 00:38:29.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:29.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:29.935 "hdgst": ${hdgst:-false}, 00:38:29.935 "ddgst": ${ddgst:-false} 00:38:29.935 }, 00:38:29.935 "method": "bdev_nvme_attach_controller" 00:38:29.935 } 00:38:29.935 EOF 00:38:29.935 )") 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:29.935 18:59:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:29.935 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:29.936 18:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:29.936 "params": { 00:38:29.936 "name": "Nvme1", 00:38:29.936 "trtype": "tcp", 00:38:29.936 "traddr": "10.0.0.2", 00:38:29.936 "adrfam": "ipv4", 00:38:29.936 "trsvcid": "4420", 00:38:29.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:29.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:29.936 "hdgst": false, 00:38:29.936 "ddgst": false 00:38:29.936 }, 00:38:29.936 "method": "bdev_nvme_attach_controller" 00:38:29.936 }' 00:38:30.194 [2024-11-17 18:59:16.511406] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:38:30.194 [2024-11-17 18:59:16.511498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929826 ] 00:38:30.194 [2024-11-17 18:59:16.585114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.194 [2024-11-17 18:59:16.632245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:30.452 Running I/O for 10 seconds... 
00:38:32.771 5669.00 IOPS, 44.29 MiB/s [2024-11-17T17:59:20.286Z] 5739.00 IOPS, 44.84 MiB/s [2024-11-17T17:59:21.246Z] 5740.00 IOPS, 44.84 MiB/s [2024-11-17T17:59:22.186Z] 5743.00 IOPS, 44.87 MiB/s [2024-11-17T17:59:23.122Z] 5741.40 IOPS, 44.85 MiB/s [2024-11-17T17:59:24.078Z] 5747.83 IOPS, 44.90 MiB/s [2024-11-17T17:59:25.013Z] 5749.86 IOPS, 44.92 MiB/s [2024-11-17T17:59:26.396Z] 5753.62 IOPS, 44.95 MiB/s [2024-11-17T17:59:27.336Z] 5756.00 IOPS, 44.97 MiB/s [2024-11-17T17:59:27.336Z] 5763.00 IOPS, 45.02 MiB/s 00:38:40.760 Latency(us) 00:38:40.760 [2024-11-17T17:59:27.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.760 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:38:40.760 Verification LBA range: start 0x0 length 0x1000 00:38:40.760 Nvme1n1 : 10.01 5764.31 45.03 0.00 0.00 22144.19 1110.47 29515.47 00:38:40.760 [2024-11-17T17:59:27.336Z] =================================================================================================================== 00:38:40.760 [2024-11-17T17:59:27.336Z] Total : 5764.31 45.03 0.00 0.00 22144.19 1110.47 29515.47 00:38:40.760 18:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=931128 00:38:40.760 18:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:38:40.760 18:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:40.760 18:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:38:40.760 18:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:38:40.760 18:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:40.760 18:59:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:40.760 18:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:40.760 18:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:40.760 { 00:38:40.760 "params": { 00:38:40.760 "name": "Nvme$subsystem", 00:38:40.760 "trtype": "$TEST_TRANSPORT", 00:38:40.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:40.760 "adrfam": "ipv4", 00:38:40.760 "trsvcid": "$NVMF_PORT", 00:38:40.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:40.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:40.760 "hdgst": ${hdgst:-false}, 00:38:40.760 "ddgst": ${ddgst:-false} 00:38:40.760 }, 00:38:40.760 "method": "bdev_nvme_attach_controller" 00:38:40.760 } 00:38:40.760 EOF 00:38:40.760 )") 00:38:40.760 18:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:40.760 [2024-11-17 18:59:27.198146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.760 [2024-11-17 18:59:27.198191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.760 18:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:38:40.760 18:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:40.760 18:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:40.760 "params": { 00:38:40.760 "name": "Nvme1", 00:38:40.760 "trtype": "tcp", 00:38:40.760 "traddr": "10.0.0.2", 00:38:40.760 "adrfam": "ipv4", 00:38:40.760 "trsvcid": "4420", 00:38:40.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:40.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:40.760 "hdgst": false, 00:38:40.760 "ddgst": false 00:38:40.760 }, 00:38:40.760 "method": "bdev_nvme_attach_controller" 00:38:40.760 }' 00:38:40.760 [2024-11-17 18:59:27.206006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.760 [2024-11-17 18:59:27.206043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.760 [2024-11-17 18:59:27.214003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.760 [2024-11-17 18:59:27.214038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.760 [2024-11-17 18:59:27.222003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.760 [2024-11-17 18:59:27.222045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.760 [2024-11-17 18:59:27.230001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.760 [2024-11-17 18:59:27.230036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.760 [2024-11-17 18:59:27.238001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.760 [2024-11-17 18:59:27.238021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.760 [2024-11-17 18:59:27.241487] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:38:40.760 [2024-11-17 18:59:27.241564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931128 ] 00:38:40.760 [2024-11-17 18:59:27.246000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.760 [2024-11-17 18:59:27.246021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.760 [2024-11-17 18:59:27.254006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.760 [2024-11-17 18:59:27.254025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.760 [2024-11-17 18:59:27.262002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.760 [2024-11-17 18:59:27.262035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.760 [2024-11-17 18:59:27.270009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.760 [2024-11-17 18:59:27.270044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.760 [2024-11-17 18:59:27.278010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.760 [2024-11-17 18:59:27.278045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.760 [2024-11-17 18:59:27.286005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.760 [2024-11-17 18:59:27.286040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.760 [2024-11-17 18:59:27.294004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.760 [2024-11-17 18:59:27.294039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:38:40.761 [2024-11-17 18:59:27.302004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.761 [2024-11-17 18:59:27.302024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.761 [2024-11-17 18:59:27.310006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.761 [2024-11-17 18:59:27.310043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.761 [2024-11-17 18:59:27.311827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:40.761 [2024-11-17 18:59:27.318058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.761 [2024-11-17 18:59:27.318087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.761 [2024-11-17 18:59:27.326073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.761 [2024-11-17 18:59:27.326111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:40.761 [2024-11-17 18:59:27.334026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:40.761 [2024-11-17 18:59:27.334050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.342008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.342046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.350004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.350040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.358005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.358049] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.361505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.022 [2024-11-17 18:59:27.366001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.366035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.374003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.374025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.382074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.382110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.390047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.390083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.398058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.398096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.406077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.406117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.414081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.414120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.422069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:38:41.022 [2024-11-17 18:59:27.422108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.430006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.430040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.438070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.438109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.446079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.446118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.454033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.454057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.462037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.462060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.470033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.470057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.478014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.478052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.486008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 
18:59:27.486045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.494008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.494045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.502007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.502044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.510007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.510043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.518003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.518023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.526004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.526038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.534004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.534038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.541997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.542040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022 [2024-11-17 18:59:27.550010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022 [2024-11-17 18:59:27.550033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:38:41.022
[2024-11-17 18:59:27.558005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:41.022
[2024-11-17 18:59:27.558025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:41.022
[preceding message pair repeated with varying timestamps, 18:59:27.566 through 18:59:27.742, elapsed 00:38:41.022 - 00:38:41.283]
Running I/O for 5 seconds... 00:38:41.283
[message pair repeated with varying timestamps, 18:59:27.758 through 18:59:28.740, elapsed 00:38:41.283 - 00:38:42.322]
10895.00 IOPS, 85.12 MiB/s [2024-11-17T17:59:28.898Z]
[message pair repeated with varying timestamps, 18:59:28.755 through 18:59:29.677, elapsed 00:38:42.322 - 00:38:43.357]
[2024-11-17 18:59:29.687299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357
[2024-11-17 18:59:29.687326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.702786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.702812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.713689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.713729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.726006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.726031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.737609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.737635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.749249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.749273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 10916.50 IOPS, 85.29 MiB/s [2024-11-17T17:59:29.933Z] [2024-11-17 18:59:29.760821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.760848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.772431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.772457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.787326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 
[2024-11-17 18:59:29.787352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.797314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.797339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.810530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.810554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.822037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.822063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.833893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.833931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.845263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.845287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.856131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.856157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.872738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.872765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.888763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.888790] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.899212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.899237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.911528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.911553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.357 [2024-11-17 18:59:29.926925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.357 [2024-11-17 18:59:29.926953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:29.936881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:29.936909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:29.952580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:29.952606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:29.964383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:29.964408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:29.980317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:29.980342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:29.996983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:29.997009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:43.618 [2024-11-17 18:59:30.007107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:30.007135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:30.022583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:30.022613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:30.032712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:30.032742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:30.048779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:30.048815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:30.059443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:30.059470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:30.074929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:30.074971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:30.085195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:30.085220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:30.101037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:30.101063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:30.111246] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:30.111271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:30.127775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:30.127800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:30.138245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:30.138284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:30.151447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:30.151472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:30.168792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:30.168817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.618 [2024-11-17 18:59:30.179280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.618 [2024-11-17 18:59:30.179305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.195058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.195089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.205131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.205155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.218060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.218085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.229574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.229599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.241266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.241291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.252745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.252772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.264904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.264930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.281741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.281769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.291599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.291624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.304277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.304304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.321280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 
[2024-11-17 18:59:30.321304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.331409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.331433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.344392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.344417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.360370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.360396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.370573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.370597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.383736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.383763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.399866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.399894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.410433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.410458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.422758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.422786] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.434523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.434547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:43.878 [2024-11-17 18:59:30.446506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:43.878 [2024-11-17 18:59:30.446545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.458036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.458071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.469695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.469732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.481180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.481205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.492810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.492838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.504210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.504235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.521786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.521812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:44.139 [2024-11-17 18:59:30.532214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.532240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.547791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.547839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.563716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.563753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.574219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.574267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.587141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.587172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.603629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.603670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.613988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.614015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.626589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.626615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.638261] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.638286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.649795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.649823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.661017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.661057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.673090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.673115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.684877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.684905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.139 [2024-11-17 18:59:30.700391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.139 [2024-11-17 18:59:30.700417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.716810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.716853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.731790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.731818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.741809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.741836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 10918.00 IOPS, 85.30 MiB/s [2024-11-17T17:59:30.975Z] [2024-11-17 18:59:30.753864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.753891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.764426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.764452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.778572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.778597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.788701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.788754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.800868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.800895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.814980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.815008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.825332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.825357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.837788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.837815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.849392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.849417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.860645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.860696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.877057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.877083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.887098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.887139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.899792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.899825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.917050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.917077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.927519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.927544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.941836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 
[2024-11-17 18:59:30.941864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.952131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.952156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.399 [2024-11-17 18:59:30.964531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.399 [2024-11-17 18:59:30.964571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:30.979116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:30.979144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:30.988852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:30.988879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.004560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.004585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.017528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.017559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.027595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.027643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.039816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.039843] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.057020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.057061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.067331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.067356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.083120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.083145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.094016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.094057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.105393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.105419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.116039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.116064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.128353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.128378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.144947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.144989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:44.660 [2024-11-17 18:59:31.155122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.155147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.172338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.172365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.186628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.186655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.196982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.197007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.209756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.209798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.221132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.221157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.660 [2024-11-17 18:59:31.232812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.660 [2024-11-17 18:59:31.232839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.246744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.246774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.256705] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.256732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.271858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.271885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.282111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.282136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.294708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.294735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.305628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.305667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.315885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.315913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.328337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.328362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.344161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.344202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.354792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.354833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.367306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.367331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.378890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.378930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.390491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.390516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.402057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.402097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.413577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.413602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.424470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.424495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.439710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.439738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.449741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 
[2024-11-17 18:59:31.449768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.462191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.462217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.473246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.473271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:44.921 [2024-11-17 18:59:31.485168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:44.921 [2024-11-17 18:59:31.485196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.180 [2024-11-17 18:59:31.498469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.180 [2024-11-17 18:59:31.498502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.180 [2024-11-17 18:59:31.509013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.180 [2024-11-17 18:59:31.509053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.180 [2024-11-17 18:59:31.523718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.180 [2024-11-17 18:59:31.523758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.180 [2024-11-17 18:59:31.533826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.180 [2024-11-17 18:59:31.533854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.180 [2024-11-17 18:59:31.545712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.180 [2024-11-17 18:59:31.545740] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.180 [2024-11-17 18:59:31.557843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.180 [2024-11-17 18:59:31.557869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.180 [2024-11-17 18:59:31.569420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.180 [2024-11-17 18:59:31.569445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.180 [2024-11-17 18:59:31.581069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.181 [2024-11-17 18:59:31.581095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.181 [2024-11-17 18:59:31.592843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.181 [2024-11-17 18:59:31.592870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.181 [2024-11-17 18:59:31.607967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.181 [2024-11-17 18:59:31.607995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.181 [2024-11-17 18:59:31.618342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.181 [2024-11-17 18:59:31.618368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.181 [2024-11-17 18:59:31.631491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.181 [2024-11-17 18:59:31.631517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.181 [2024-11-17 18:59:31.648786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.181 [2024-11-17 18:59:31.648829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:45.181 [2024-11-17 18:59:31.663576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.181 [2024-11-17 18:59:31.663617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.181 [2024-11-17 18:59:31.673480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.181 [2024-11-17 18:59:31.673506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.181 [2024-11-17 18:59:31.686404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.181 [2024-11-17 18:59:31.686429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.181 [2024-11-17 18:59:31.698357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.181 [2024-11-17 18:59:31.698381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.181 [2024-11-17 18:59:31.710395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.181 [2024-11-17 18:59:31.710420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.181 [2024-11-17 18:59:31.722266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.181 [2024-11-17 18:59:31.722291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.181 [2024-11-17 18:59:31.734062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.181 [2024-11-17 18:59:31.734088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.181 [2024-11-17 18:59:31.745744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.181 [2024-11-17 18:59:31.745771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 10944.00 IOPS, 85.50 MiB/s 
[2024-11-17T17:59:32.016Z] [2024-11-17 18:59:31.757715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.757756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.769056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.769080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.780191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.780229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.796013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.796052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.812554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.812594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.823009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.823048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.840249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.840273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.856839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.856881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.867311] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.867337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.879717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.879746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.895029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.895071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.905187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.905213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.920991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.921017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.931326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.931351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.948589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.948613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.959704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.959731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.974581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.974613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.984863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.984889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:31.997327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:31.997350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.440 [2024-11-17 18:59:32.011154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.440 [2024-11-17 18:59:32.011181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.021164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.021189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.035944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.035995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.046006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.046031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.059196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.059221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.075807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 
[2024-11-17 18:59:32.075833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.086220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.086247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.098850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.098878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.111005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.111030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.122563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.122589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.134273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.134298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.145708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.145750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.157441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.157468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.169088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.169112] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.180460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.180485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.195849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.195876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.206284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.206316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.218976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.219000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.229933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.229974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.241735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.241761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.253301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.253325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.700 [2024-11-17 18:59:32.264712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.700 [2024-11-17 18:59:32.264738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:45.960 [2024-11-17 18:59:32.276147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.276174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.292873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.292899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.308892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.308919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.319103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.319128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.336483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.336507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.347286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.347310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.360185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.360210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.374020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.374046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.383967] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.383993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.396248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.396272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.412485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.412525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.423129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.423152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.440576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.440600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.453546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.453596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.463842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.960 [2024-11-17 18:59:32.463867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.960 [2024-11-17 18:59:32.476391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.961 [2024-11-17 18:59:32.476415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.961 [2024-11-17 18:59:32.493029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:45.961 [2024-11-17 18:59:32.493053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.961 [2024-11-17 18:59:32.503594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.961 [2024-11-17 18:59:32.503620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.961 [2024-11-17 18:59:32.520079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.961 [2024-11-17 18:59:32.520118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.218 [2024-11-17 18:59:32.536529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.218 [2024-11-17 18:59:32.536571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.218 [2024-11-17 18:59:32.546914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.218 [2024-11-17 18:59:32.546941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.218 [2024-11-17 18:59:32.559439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.218 [2024-11-17 18:59:32.559463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.218 [2024-11-17 18:59:32.574772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.218 [2024-11-17 18:59:32.574814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.218 [2024-11-17 18:59:32.585068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.219 [2024-11-17 18:59:32.585094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.219 [2024-11-17 18:59:32.599236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.219 
[2024-11-17 18:59:32.599262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.219 [2024-11-17 18:59:32.609614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.219 [2024-11-17 18:59:32.609654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.219 [2024-11-17 18:59:32.622402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.219 [2024-11-17 18:59:32.622426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.219 [2024-11-17 18:59:32.634078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.219 [2024-11-17 18:59:32.634102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.219 [2024-11-17 18:59:32.645206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.219 [2024-11-17 18:59:32.645230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.219 [2024-11-17 18:59:32.656133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.219 [2024-11-17 18:59:32.656157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.219 [2024-11-17 18:59:32.671018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.219 [2024-11-17 18:59:32.671042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.219 [2024-11-17 18:59:32.680774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.219 [2024-11-17 18:59:32.680816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.219 [2024-11-17 18:59:32.695164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.219 [2024-11-17 18:59:32.695197] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.219 [2024-11-17 18:59:32.705303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:46.219 [2024-11-17 18:59:32.705328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.219 [2024-11-17 18:59:32.717942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:46.219 [2024-11-17 18:59:32.717982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.219 [2024-11-17 18:59:32.728645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:46.219 [2024-11-17 18:59:32.728693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.219 [2024-11-17 18:59:32.743215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:46.219 [2024-11-17 18:59:32.743255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.219 [2024-11-17 18:59:32.753739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:46.219 [2024-11-17 18:59:32.753766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.219 10949.80 IOPS, 85.55 MiB/s [2024-11-17T17:59:32.795Z]
[2024-11-17 18:59:32.764478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:46.219 [2024-11-17 18:59:32.764506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.219
00:38:46.219 Latency(us)
00:38:46.219 [2024-11-17T17:59:32.795Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:38:46.219 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:38:46.219 Nvme1n1             :       5.01   10953.90      85.58       0.00     0.00   11669.67    2936.98   19320.98
00:38:46.219 [2024-11-17T17:59:32.795Z] ===================================================================================================================
00:38:46.219 [2024-11-17T17:59:32.795Z] Total               :            10953.90      85.58       0.00     0.00   11669.67    2936.98   19320.98
00:38:46.219 [2024-11-17 18:59:32.770015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:46.219 [2024-11-17 18:59:32.770042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.219 [2024-11-17 18:59:32.778001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:46.219 [2024-11-17 18:59:32.778042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.219 [2024-11-17 18:59:32.786041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:46.219 [2024-11-17 18:59:32.786080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.219 [2024-11-17 18:59:32.794078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:46.219 [2024-11-17 18:59:32.794128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.478 [2024-11-17 18:59:32.802072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:46.478 [2024-11-17 18:59:32.802120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.478 [2024-11-17 18:59:32.810067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:46.478 [2024-11-17 18:59:32.810115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.478 [2024-11-17 18:59:32.818056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:46.478 [2024-11-17 18:59:32.818102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:46.478 [2024-11-17 18:59:32.826071]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.826120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.834064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.834111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.842067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.842116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.850069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.850117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.858067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.858116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.866078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.866129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.874069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.874115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.882071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.882119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.890067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.890113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.898066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.898101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.906025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.906062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.914073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.914119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.922071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.922116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.930064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.930101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.938003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.938036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.946002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 [2024-11-17 18:59:32.946020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 [2024-11-17 18:59:32.954034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:46.478 
[2024-11-17 18:59:32.954054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:46.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (931128) - No such process 00:38:46.478 18:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 931128 00:38:46.478 18:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:46.478 18:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.478 18:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:46.478 18:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.478 18:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:46.478 18:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.478 18:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:46.478 delay0 00:38:46.478 18:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.478 18:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:38:46.478 18:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.478 18:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:46.478 18:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:38:46.478 18:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:38:46.735 [2024-11-17 18:59:33.072589] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:54.857 Initializing NVMe Controllers 00:38:54.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:54.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:54.857 Initialization complete. Launching workers. 00:38:54.857 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 229, failed: 26693 00:38:54.857 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26780, failed to submit 142 00:38:54.857 success 26711, unsuccessful 69, failed 0 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:54.857 rmmod nvme_tcp 
00:38:54.857 rmmod nvme_fabrics 00:38:54.857 rmmod nvme_keyring 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 929805 ']' 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 929805 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 929805 ']' 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 929805 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 929805 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 929805' 00:38:54.857 killing process with pid 929805 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 929805 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@978 -- # wait 929805 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:54.857 18:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.236 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:56.236 00:38:56.236 real 0m29.089s 00:38:56.236 user 0m41.363s 00:38:56.236 sys 0m10.221s 00:38:56.236 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:56.236 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:38:56.236 ************************************ 00:38:56.236 END TEST nvmf_zcopy 00:38:56.236 ************************************ 00:38:56.236 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:56.236 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:56.236 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:56.236 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:56.236 ************************************ 00:38:56.236 START TEST nvmf_nmic 00:38:56.236 ************************************ 00:38:56.236 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:56.495 * Looking for test storage... 
00:38:56.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:56.495 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:56.495 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:38:56.495 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:56.495 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:56.495 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:56.495 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:56.495 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:56.495 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:38:56.495 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:38:56.495 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:38:56.495 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:38:56.495 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:38:56.495 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:38:56.495 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:56.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.496 --rc genhtml_branch_coverage=1 00:38:56.496 --rc genhtml_function_coverage=1 00:38:56.496 --rc genhtml_legend=1 00:38:56.496 --rc geninfo_all_blocks=1 00:38:56.496 --rc geninfo_unexecuted_blocks=1 00:38:56.496 00:38:56.496 ' 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:56.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.496 --rc genhtml_branch_coverage=1 00:38:56.496 --rc genhtml_function_coverage=1 00:38:56.496 --rc genhtml_legend=1 00:38:56.496 --rc geninfo_all_blocks=1 00:38:56.496 --rc geninfo_unexecuted_blocks=1 00:38:56.496 00:38:56.496 ' 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:56.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.496 --rc genhtml_branch_coverage=1 00:38:56.496 --rc genhtml_function_coverage=1 00:38:56.496 --rc genhtml_legend=1 00:38:56.496 --rc geninfo_all_blocks=1 00:38:56.496 --rc geninfo_unexecuted_blocks=1 00:38:56.496 00:38:56.496 ' 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:56.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.496 --rc genhtml_branch_coverage=1 00:38:56.496 --rc genhtml_function_coverage=1 00:38:56.496 --rc genhtml_legend=1 00:38:56.496 --rc geninfo_all_blocks=1 00:38:56.496 --rc geninfo_unexecuted_blocks=1 00:38:56.496 00:38:56.496 ' 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:56.496 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:38:56.497 18:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.033 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:59.033 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:38:59.033 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:59.033 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:59.033 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:59.033 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:59.033 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:59.033 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:38:59.033 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:59.034 18:59:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:59.034 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:59.034 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:59.034 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:59.034 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:59.034 18:59:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:59.034 18:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:59.034 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:59.034 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:59.034 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:59.034 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:59.034 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:59.034 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:59.034 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:59.034 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:59.034 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:59.034 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:38:59.034 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:59.034 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:59.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:59.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:38:59.035 00:38:59.035 --- 10.0.0.2 ping statistics --- 00:38:59.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.035 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:59.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:59.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:38:59.035 00:38:59.035 --- 10.0.0.1 ping statistics --- 00:38:59.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.035 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=934516 
00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 934516 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 934516 ']' 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:59.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.035 [2024-11-17 18:59:45.207621] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:59.035 [2024-11-17 18:59:45.208670] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:38:59.035 [2024-11-17 18:59:45.208732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:59.035 [2024-11-17 18:59:45.281885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:59.035 [2024-11-17 18:59:45.328636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:59.035 [2024-11-17 18:59:45.328714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:59.035 [2024-11-17 18:59:45.328733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:59.035 [2024-11-17 18:59:45.328746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:59.035 [2024-11-17 18:59:45.328771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:59.035 [2024-11-17 18:59:45.330377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:59.035 [2024-11-17 18:59:45.330441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:59.035 [2024-11-17 18:59:45.330463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:59.035 [2024-11-17 18:59:45.330467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.035 [2024-11-17 18:59:45.413041] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:59.035 [2024-11-17 18:59:45.413273] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:59.035 [2024-11-17 18:59:45.413526] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:38:59.035 [2024-11-17 18:59:45.414209] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:59.035 [2024-11-17 18:59:45.414426] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.035 [2024-11-17 18:59:45.463141] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.035 Malloc0 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.035 [2024-11-17 18:59:45.535421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:38:59.035 test case1: single bdev can't be used in multiple subsystems 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.035 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.036 [2024-11-17 18:59:45.559089] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:38:59.036 [2024-11-17 18:59:45.559126] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:38:59.036 [2024-11-17 18:59:45.559144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.036 request: 00:38:59.036 { 00:38:59.036 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:38:59.036 "namespace": { 00:38:59.036 "bdev_name": "Malloc0", 00:38:59.036 "no_auto_visible": false 00:38:59.036 }, 00:38:59.036 "method": "nvmf_subsystem_add_ns", 00:38:59.036 "req_id": 1 00:38:59.036 } 00:38:59.036 Got JSON-RPC error response 00:38:59.036 response: 00:38:59.036 { 00:38:59.036 "code": -32602, 00:38:59.036 "message": "Invalid parameters" 00:38:59.036 } 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:38:59.036 Adding namespace failed - expected result. 
00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:38:59.036 test case2: host connect to nvmf target in multiple paths 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:59.036 [2024-11-17 18:59:45.567186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.036 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:59.294 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:38:59.552 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:38:59.552 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:38:59.552 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:38:59.552 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:38:59.552 18:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:01.455 18:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:01.455 18:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:01.455 18:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:01.455 18:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:01.455 18:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:01.455 18:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:01.455 18:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:01.455 [global] 00:39:01.455 thread=1 00:39:01.455 invalidate=1 00:39:01.455 rw=write 00:39:01.455 time_based=1 00:39:01.455 runtime=1 00:39:01.455 ioengine=libaio 00:39:01.455 direct=1 00:39:01.455 bs=4096 00:39:01.455 iodepth=1 00:39:01.455 norandommap=0 00:39:01.455 numjobs=1 00:39:01.455 00:39:01.455 verify_dump=1 00:39:01.455 verify_backlog=512 00:39:01.455 verify_state_save=0 00:39:01.455 do_verify=1 00:39:01.455 verify=crc32c-intel 00:39:01.455 [job0] 00:39:01.455 filename=/dev/nvme0n1 00:39:01.455 Could not set queue depth (nvme0n1) 00:39:01.713 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:01.713 fio-3.35 00:39:01.713 Starting 1 thread 00:39:03.090 00:39:03.090 job0: (groupid=0, jobs=1): err= 0: pid=935004: Sun Nov 17 
18:59:49 2024 00:39:03.090 read: IOPS=2386, BW=9546KiB/s (9776kB/s)(9556KiB/1001msec) 00:39:03.090 slat (nsec): min=4096, max=82374, avg=9240.02, stdev=6015.92 00:39:03.090 clat (usec): min=186, max=360, avg=214.58, stdev=19.70 00:39:03.090 lat (usec): min=191, max=394, avg=223.82, stdev=23.55 00:39:03.090 clat percentiles (usec): 00:39:03.090 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 200], 20.00th=[ 204], 00:39:03.090 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 212], 00:39:03.090 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 251], 00:39:03.090 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 338], 99.95th=[ 343], 00:39:03.090 | 99.99th=[ 363] 00:39:03.090 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:39:03.090 slat (usec): min=5, max=24897, avg=21.52, stdev=491.88 00:39:03.090 clat (usec): min=121, max=381, avg=153.62, stdev=22.63 00:39:03.090 lat (usec): min=128, max=25103, avg=175.14, stdev=493.56 00:39:03.090 clat percentiles (usec): 00:39:03.090 | 1.00th=[ 127], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:39:03.090 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:39:03.090 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 190], 95.00th=[ 194], 00:39:03.090 | 99.00th=[ 233], 99.50th=[ 277], 99.90th=[ 322], 99.95th=[ 326], 00:39:03.090 | 99.99th=[ 383] 00:39:03.090 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:39:03.090 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:39:03.090 lat (usec) : 250=97.03%, 500=2.97% 00:39:03.090 cpu : usr=1.70%, sys=6.30%, ctx=4952, majf=0, minf=1 00:39:03.090 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:03.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.091 issued rwts: total=2389,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.091 latency 
: target=0, window=0, percentile=100.00%, depth=1 00:39:03.091 00:39:03.091 Run status group 0 (all jobs): 00:39:03.091 READ: bw=9546KiB/s (9776kB/s), 9546KiB/s-9546KiB/s (9776kB/s-9776kB/s), io=9556KiB (9785kB), run=1001-1001msec 00:39:03.091 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:39:03.091 00:39:03.091 Disk stats (read/write): 00:39:03.091 nvme0n1: ios=2073/2476, merge=0/0, ticks=1378/372, in_queue=1750, util=98.20% 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:03.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:03.091 rmmod nvme_tcp 00:39:03.091 rmmod nvme_fabrics 00:39:03.091 rmmod nvme_keyring 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 934516 ']' 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 934516 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 934516 ']' 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 934516 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 934516 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 934516' 00:39:03.091 killing process with pid 934516 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 934516 00:39:03.091 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 934516 00:39:03.351 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:03.351 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:03.351 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:03.351 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:03.351 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:39:03.351 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:03.351 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:39:03.351 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:03.351 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:03.351 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:03.351 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:39:03.351 18:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:05.258 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:05.258 00:39:05.258 real 0m9.012s 00:39:05.258 user 0m16.620s 00:39:05.258 sys 0m3.386s 00:39:05.258 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:05.258 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:05.258 ************************************ 00:39:05.258 END TEST nvmf_nmic 00:39:05.258 ************************************ 00:39:05.258 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:05.258 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:05.258 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:05.258 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:05.518 ************************************ 00:39:05.518 START TEST nvmf_fio_target 00:39:05.518 ************************************ 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:05.518 * Looking for test storage... 
00:39:05.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:05.518 
18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:05.518 18:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:05.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.518 --rc genhtml_branch_coverage=1 00:39:05.518 --rc genhtml_function_coverage=1 00:39:05.518 --rc genhtml_legend=1 00:39:05.518 --rc geninfo_all_blocks=1 00:39:05.518 --rc geninfo_unexecuted_blocks=1 00:39:05.518 00:39:05.518 ' 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:05.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.518 --rc genhtml_branch_coverage=1 00:39:05.518 --rc genhtml_function_coverage=1 00:39:05.518 --rc genhtml_legend=1 00:39:05.518 --rc geninfo_all_blocks=1 00:39:05.518 --rc geninfo_unexecuted_blocks=1 00:39:05.518 00:39:05.518 ' 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:05.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.518 --rc genhtml_branch_coverage=1 00:39:05.518 --rc genhtml_function_coverage=1 00:39:05.518 --rc genhtml_legend=1 00:39:05.518 --rc geninfo_all_blocks=1 00:39:05.518 --rc geninfo_unexecuted_blocks=1 00:39:05.518 00:39:05.518 ' 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:05.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:05.518 --rc genhtml_branch_coverage=1 00:39:05.518 --rc genhtml_function_coverage=1 00:39:05.518 --rc genhtml_legend=1 00:39:05.518 --rc geninfo_all_blocks=1 
00:39:05.518 --rc geninfo_unexecuted_blocks=1 00:39:05.518 00:39:05.518 ' 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:05.518 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:05.519 
18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.519 18:59:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:05.519 
18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:05.519 18:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:05.519 18:59:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:08.126 18:59:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:08.126 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:08.126 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:08.126 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:08.126 
18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:08.127 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:08.127 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:08.127 18:59:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:08.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:08.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:39:08.127 00:39:08.127 --- 10.0.0.2 ping statistics --- 00:39:08.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:08.127 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:08.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:08.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:39:08.127 00:39:08.127 --- 10.0.0.1 ping statistics --- 00:39:08.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:08.127 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:08.127 18:59:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=937107 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 937107 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 937107 ']' 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:08.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:08.127 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:08.127 [2024-11-17 18:59:54.295248] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:08.127 [2024-11-17 18:59:54.296284] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:39:08.127 [2024-11-17 18:59:54.296342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:08.127 [2024-11-17 18:59:54.368962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:08.127 [2024-11-17 18:59:54.414769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:08.127 [2024-11-17 18:59:54.414822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:08.127 [2024-11-17 18:59:54.414851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:08.127 [2024-11-17 18:59:54.414863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:08.127 [2024-11-17 18:59:54.414873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:08.127 [2024-11-17 18:59:54.416442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:08.127 [2024-11-17 18:59:54.416472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:08.127 [2024-11-17 18:59:54.416529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:08.127 [2024-11-17 18:59:54.416532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.128 [2024-11-17 18:59:54.498830] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:08.128 [2024-11-17 18:59:54.499091] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:08.128 [2024-11-17 18:59:54.499339] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:08.128 [2024-11-17 18:59:54.499991] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:08.128 [2024-11-17 18:59:54.500228] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:08.128 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.128 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:39:08.128 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:08.128 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:08.128 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:08.128 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:08.128 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:08.386 [2024-11-17 18:59:54.809334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:08.386 18:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:08.645 18:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:08.645 18:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:39:08.904 18:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:08.904 18:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:09.474 18:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:09.474 18:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:09.474 18:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:09.474 18:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:09.734 18:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:10.301 18:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:10.301 18:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:10.301 18:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:10.301 18:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:10.867 18:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:39:10.867 18:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:10.867 18:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:11.125 18:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:11.125 18:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:11.384 18:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:11.384 18:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:11.951 18:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:12.209 [2024-11-17 18:59:58.553426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.209 18:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:12.467 18:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:12.725 18:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:12.725 18:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:12.725 18:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:39:12.725 18:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:12.725 18:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:39:12.725 18:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:39:12.725 18:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:39:15.254 19:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:15.254 19:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:15.254 19:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:15.254 19:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:39:15.254 19:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:15.254 19:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:39:15.254 19:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:15.254 [global] 00:39:15.254 thread=1 00:39:15.254 invalidate=1 00:39:15.254 rw=write 00:39:15.254 time_based=1 00:39:15.254 runtime=1 00:39:15.254 ioengine=libaio 00:39:15.254 direct=1 00:39:15.254 bs=4096 00:39:15.254 iodepth=1 00:39:15.254 norandommap=0 00:39:15.254 numjobs=1 00:39:15.254 00:39:15.254 verify_dump=1 00:39:15.254 verify_backlog=512 00:39:15.254 verify_state_save=0 00:39:15.254 do_verify=1 00:39:15.254 verify=crc32c-intel 00:39:15.254 [job0] 00:39:15.254 filename=/dev/nvme0n1 00:39:15.254 [job1] 00:39:15.254 filename=/dev/nvme0n2 00:39:15.254 [job2] 00:39:15.254 filename=/dev/nvme0n3 00:39:15.254 [job3] 00:39:15.254 filename=/dev/nvme0n4 00:39:15.254 Could not set queue depth (nvme0n1) 00:39:15.254 Could not set queue depth (nvme0n2) 00:39:15.254 Could not set queue depth (nvme0n3) 00:39:15.254 Could not set queue depth (nvme0n4) 00:39:15.254 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:15.254 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:15.254 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:15.254 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:15.254 fio-3.35 00:39:15.254 Starting 4 threads 00:39:16.191 00:39:16.191 job0: (groupid=0, jobs=1): err= 0: pid=938141: Sun Nov 17 19:00:02 2024 00:39:16.191 read: IOPS=640, BW=2564KiB/s (2625kB/s)(2592KiB/1011msec) 00:39:16.191 slat (nsec): min=5377, max=38989, avg=8955.98, stdev=5562.57 00:39:16.191 clat (usec): min=185, max=41421, avg=1169.85, stdev=5884.74 00:39:16.191 lat (usec): min=192, 
max=41437, avg=1178.80, stdev=5885.73 00:39:16.191 clat percentiles (usec): 00:39:16.191 | 1.00th=[ 200], 5.00th=[ 245], 10.00th=[ 265], 20.00th=[ 269], 00:39:16.191 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 293], 00:39:16.192 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 367], 95.00th=[ 420], 00:39:16.192 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:39:16.192 | 99.99th=[41681] 00:39:16.192 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets 00:39:16.192 slat (nsec): min=6972, max=33028, avg=8412.27, stdev=2243.24 00:39:16.192 clat (usec): min=145, max=391, avg=228.85, stdev=25.73 00:39:16.192 lat (usec): min=152, max=400, avg=237.26, stdev=26.11 00:39:16.192 clat percentiles (usec): 00:39:16.192 | 1.00th=[ 176], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 208], 00:39:16.192 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 231], 00:39:16.192 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 273], 00:39:16.192 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 392], 99.95th=[ 392], 00:39:16.192 | 99.99th=[ 392] 00:39:16.192 bw ( KiB/s): min= 8192, max= 8192, per=49.40%, avg=8192.00, stdev= 0.00, samples=1 00:39:16.192 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:16.192 lat (usec) : 250=47.79%, 500=51.20%, 750=0.18% 00:39:16.192 lat (msec) : 50=0.84% 00:39:16.192 cpu : usr=0.79%, sys=2.28%, ctx=1672, majf=0, minf=1 00:39:16.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:16.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.192 issued rwts: total=648,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:16.192 job1: (groupid=0, jobs=1): err= 0: pid=938163: Sun Nov 17 19:00:02 2024 00:39:16.192 read: IOPS=26, BW=106KiB/s (109kB/s)(108KiB/1016msec) 
00:39:16.192 slat (nsec): min=7210, max=35143, avg=19515.96, stdev=9641.33 00:39:16.192 clat (usec): min=242, max=41022, avg=34186.87, stdev=14862.73 00:39:16.192 lat (usec): min=260, max=41040, avg=34206.39, stdev=14861.97 00:39:16.192 clat percentiles (usec): 00:39:16.192 | 1.00th=[ 243], 5.00th=[ 396], 10.00th=[ 400], 20.00th=[40633], 00:39:16.192 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:16.192 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:16.192 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:16.192 | 99.99th=[41157] 00:39:16.192 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:39:16.192 slat (nsec): min=7143, max=33002, avg=8721.61, stdev=2220.77 00:39:16.192 clat (usec): min=151, max=706, avg=169.43, stdev=27.19 00:39:16.192 lat (usec): min=159, max=714, avg=178.15, stdev=27.35 00:39:16.192 clat percentiles (usec): 00:39:16.192 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 157], 20.00th=[ 159], 00:39:16.192 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:39:16.192 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:39:16.192 | 99.00th=[ 215], 99.50th=[ 247], 99.90th=[ 709], 99.95th=[ 709], 00:39:16.192 | 99.99th=[ 709] 00:39:16.192 bw ( KiB/s): min= 4096, max= 4096, per=24.70%, avg=4096.00, stdev= 0.00, samples=1 00:39:16.192 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:16.192 lat (usec) : 250=94.81%, 500=0.74%, 750=0.19% 00:39:16.192 lat (msec) : 50=4.27% 00:39:16.192 cpu : usr=0.20%, sys=0.69%, ctx=539, majf=0, minf=2 00:39:16.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:16.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.192 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.192 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:39:16.192 job2: (groupid=0, jobs=1): err= 0: pid=938222: Sun Nov 17 19:00:02 2024 00:39:16.192 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:39:16.192 slat (nsec): min=4812, max=32771, avg=8195.46, stdev=3640.78 00:39:16.192 clat (usec): min=192, max=589, avg=247.94, stdev=53.58 00:39:16.192 lat (usec): min=199, max=605, avg=256.14, stdev=55.38 00:39:16.192 clat percentiles (usec): 00:39:16.192 | 1.00th=[ 196], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 204], 00:39:16.192 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 245], 00:39:16.192 | 70.00th=[ 273], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 326], 00:39:16.192 | 99.00th=[ 429], 99.50th=[ 486], 99.90th=[ 578], 99.95th=[ 586], 00:39:16.192 | 99.99th=[ 586] 00:39:16.192 write: IOPS=2161, BW=8647KiB/s (8855kB/s)(8656KiB/1001msec); 0 zone resets 00:39:16.192 slat (nsec): min=6313, max=50633, avg=9477.78, stdev=3163.67 00:39:16.192 clat (usec): min=141, max=421, avg=205.75, stdev=36.50 00:39:16.192 lat (usec): min=149, max=429, avg=215.23, stdev=37.27 00:39:16.192 clat percentiles (usec): 00:39:16.192 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 165], 00:39:16.192 | 30.00th=[ 188], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 210], 00:39:16.192 | 70.00th=[ 221], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 265], 00:39:16.192 | 99.00th=[ 310], 99.50th=[ 314], 99.90th=[ 383], 99.95th=[ 404], 00:39:16.192 | 99.99th=[ 420] 00:39:16.192 bw ( KiB/s): min= 8192, max= 8192, per=49.40%, avg=8192.00, stdev= 0.00, samples=1 00:39:16.192 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:16.192 lat (usec) : 250=76.61%, 500=23.22%, 750=0.17% 00:39:16.192 cpu : usr=1.30%, sys=4.30%, ctx=4214, majf=0, minf=1 00:39:16.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:16.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.192 issued rwts: total=2048,2164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:16.192 job3: (groupid=0, jobs=1): err= 0: pid=938239: Sun Nov 17 19:00:02 2024 00:39:16.192 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:39:16.192 slat (nsec): min=7119, max=36957, avg=20248.55, stdev=10008.64 00:39:16.192 clat (usec): min=40782, max=41004, avg=40962.71, stdev=45.33 00:39:16.192 lat (usec): min=40789, max=41021, avg=40982.96, stdev=45.82 00:39:16.192 clat percentiles (usec): 00:39:16.192 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:16.192 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:16.192 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:16.192 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:16.192 | 99.99th=[41157] 00:39:16.192 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:39:16.192 slat (nsec): min=7869, max=23890, avg=9351.31, stdev=1892.49 00:39:16.192 clat (usec): min=168, max=801, avg=196.01, stdev=31.87 00:39:16.192 lat (usec): min=176, max=811, avg=205.36, stdev=32.08 00:39:16.192 clat percentiles (usec): 00:39:16.192 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:39:16.192 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:39:16.192 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 215], 95.00th=[ 225], 00:39:16.192 | 99.00th=[ 245], 99.50th=[ 265], 99.90th=[ 799], 99.95th=[ 799], 00:39:16.192 | 99.99th=[ 799] 00:39:16.192 bw ( KiB/s): min= 4096, max= 4096, per=24.70%, avg=4096.00, stdev= 0.00, samples=1 00:39:16.192 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:16.192 lat (usec) : 250=95.13%, 500=0.56%, 1000=0.19% 00:39:16.192 lat (msec) : 50=4.12% 00:39:16.192 cpu : usr=0.20%, sys=0.79%, ctx=535, majf=0, minf=1 00:39:16.192 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:16.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.192 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:16.192 00:39:16.192 Run status group 0 (all jobs): 00:39:16.192 READ: bw=10.6MiB/s (11.1MB/s), 87.2KiB/s-8184KiB/s (89.3kB/s-8380kB/s), io=10.7MiB (11.2MB), run=1001-1016msec 00:39:16.192 WRITE: bw=16.2MiB/s (17.0MB/s), 2016KiB/s-8647KiB/s (2064kB/s-8855kB/s), io=16.5MiB (17.3MB), run=1001-1016msec 00:39:16.192 00:39:16.192 Disk stats (read/write): 00:39:16.192 nvme0n1: ios=648/1024, merge=0/0, ticks=804/230, in_queue=1034, util=90.48% 00:39:16.192 nvme0n2: ios=20/512, merge=0/0, ticks=678/84, in_queue=762, util=83.20% 00:39:16.192 nvme0n3: ios=1567/1964, merge=0/0, ticks=902/397, in_queue=1299, util=98.59% 00:39:16.192 nvme0n4: ios=39/512, merge=0/0, ticks=1599/93, in_queue=1692, util=97.57% 00:39:16.451 19:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:16.451 [global] 00:39:16.451 thread=1 00:39:16.451 invalidate=1 00:39:16.451 rw=randwrite 00:39:16.451 time_based=1 00:39:16.451 runtime=1 00:39:16.451 ioengine=libaio 00:39:16.451 direct=1 00:39:16.451 bs=4096 00:39:16.451 iodepth=1 00:39:16.451 norandommap=0 00:39:16.451 numjobs=1 00:39:16.451 00:39:16.451 verify_dump=1 00:39:16.451 verify_backlog=512 00:39:16.451 verify_state_save=0 00:39:16.451 do_verify=1 00:39:16.451 verify=crc32c-intel 00:39:16.451 [job0] 00:39:16.451 filename=/dev/nvme0n1 00:39:16.451 [job1] 00:39:16.451 filename=/dev/nvme0n2 00:39:16.451 [job2] 00:39:16.451 filename=/dev/nvme0n3 00:39:16.451 [job3] 00:39:16.451 filename=/dev/nvme0n4 00:39:16.451 Could not set queue 
depth (nvme0n1) 00:39:16.451 Could not set queue depth (nvme0n2) 00:39:16.451 Could not set queue depth (nvme0n3) 00:39:16.451 Could not set queue depth (nvme0n4) 00:39:16.451 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:16.451 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:16.451 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:16.451 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:16.451 fio-3.35 00:39:16.451 Starting 4 threads 00:39:17.827 00:39:17.827 job0: (groupid=0, jobs=1): err= 0: pid=938501: Sun Nov 17 19:00:04 2024 00:39:17.827 read: IOPS=2435, BW=9740KiB/s (9974kB/s)(9740KiB/1000msec) 00:39:17.827 slat (nsec): min=4599, max=24804, avg=6255.97, stdev=2054.84 00:39:17.827 clat (usec): min=184, max=564, avg=225.84, stdev=31.01 00:39:17.827 lat (usec): min=194, max=578, avg=232.10, stdev=31.78 00:39:17.827 clat percentiles (usec): 00:39:17.827 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:39:17.827 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:39:17.827 | 70.00th=[ 231], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 265], 00:39:17.827 | 99.00th=[ 322], 99.50th=[ 429], 99.90th=[ 537], 99.95th=[ 537], 00:39:17.827 | 99.99th=[ 562] 00:39:17.827 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:39:17.827 slat (nsec): min=6256, max=34689, avg=8096.38, stdev=2161.81 00:39:17.827 clat (usec): min=123, max=681, avg=157.97, stdev=30.27 00:39:17.827 lat (usec): min=131, max=692, avg=166.07, stdev=31.04 00:39:17.827 clat percentiles (usec): 00:39:17.827 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:39:17.827 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153], 00:39:17.827 | 70.00th=[ 159], 80.00th=[ 169], 
90.00th=[ 194], 95.00th=[ 212], 00:39:17.827 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 510], 99.95th=[ 529], 00:39:17.827 | 99.99th=[ 685] 00:39:17.827 bw ( KiB/s): min=12240, max=12240, per=61.92%, avg=12240.00, stdev= 0.00, samples=1 00:39:17.827 iops : min= 3060, max= 3060, avg=3060.00, stdev= 0.00, samples=1 00:39:17.827 lat (usec) : 250=91.11%, 500=8.73%, 750=0.16% 00:39:17.827 cpu : usr=2.00%, sys=3.70%, ctx=4997, majf=0, minf=1 00:39:17.827 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:17.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:17.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:17.827 issued rwts: total=2435,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:17.827 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:17.827 job1: (groupid=0, jobs=1): err= 0: pid=938502: Sun Nov 17 19:00:04 2024 00:39:17.827 read: IOPS=1014, BW=4059KiB/s (4156kB/s)(4140KiB/1020msec) 00:39:17.827 slat (nsec): min=4186, max=23044, avg=5556.38, stdev=1795.91 00:39:17.827 clat (usec): min=191, max=41299, avg=686.03, stdev=4181.63 00:39:17.827 lat (usec): min=200, max=41305, avg=691.59, stdev=4182.43 00:39:17.827 clat percentiles (usec): 00:39:17.827 | 1.00th=[ 208], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 245], 00:39:17.827 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:39:17.827 | 70.00th=[ 255], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:39:17.827 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:17.827 | 99.99th=[41157] 00:39:17.827 write: IOPS=1505, BW=6024KiB/s (6168kB/s)(6144KiB/1020msec); 0 zone resets 00:39:17.827 slat (nsec): min=5467, max=58379, avg=8821.38, stdev=5392.29 00:39:17.827 clat (usec): min=129, max=657, avg=185.61, stdev=53.85 00:39:17.827 lat (usec): min=135, max=673, avg=194.43, stdev=56.79 00:39:17.827 clat percentiles (usec): 00:39:17.827 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 
141], 20.00th=[ 145], 00:39:17.827 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 184], 00:39:17.827 | 70.00th=[ 215], 80.00th=[ 235], 90.00th=[ 253], 95.00th=[ 273], 00:39:17.827 | 99.00th=[ 371], 99.50th=[ 396], 99.90th=[ 586], 99.95th=[ 660], 00:39:17.827 | 99.99th=[ 660] 00:39:17.827 bw ( KiB/s): min= 3120, max= 9168, per=31.08%, avg=6144.00, stdev=4276.58, samples=2 00:39:17.827 iops : min= 780, max= 2292, avg=1536.00, stdev=1069.15, samples=2 00:39:17.827 lat (usec) : 250=73.05%, 500=26.22%, 750=0.31% 00:39:17.827 lat (msec) : 50=0.43% 00:39:17.827 cpu : usr=0.98%, sys=1.86%, ctx=2572, majf=0, minf=1 00:39:17.827 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:17.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:17.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:17.827 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:17.827 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:17.827 job2: (groupid=0, jobs=1): err= 0: pid=938505: Sun Nov 17 19:00:04 2024 00:39:17.827 read: IOPS=21, BW=84.9KiB/s (87.0kB/s)(88.0KiB/1036msec) 00:39:17.827 slat (nsec): min=7493, max=15037, avg=13704.23, stdev=1482.95 00:39:17.827 clat (usec): min=40860, max=41253, avg=40993.81, stdev=66.14 00:39:17.827 lat (usec): min=40874, max=41260, avg=41007.51, stdev=64.93 00:39:17.827 clat percentiles (usec): 00:39:17.827 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:17.827 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:17.827 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:17.827 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:17.827 | 99.99th=[41157] 00:39:17.827 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:39:17.827 slat (nsec): min=7781, max=76504, avg=16714.01, stdev=7915.50 00:39:17.827 clat (usec): 
min=161, max=832, avg=239.44, stdev=48.11 00:39:17.827 lat (usec): min=185, max=853, avg=256.15, stdev=51.08 00:39:17.827 clat percentiles (usec): 00:39:17.828 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 212], 00:39:17.828 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 241], 00:39:17.828 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 285], 00:39:17.828 | 99.00th=[ 359], 99.50th=[ 619], 99.90th=[ 832], 99.95th=[ 832], 00:39:17.828 | 99.99th=[ 832] 00:39:17.828 bw ( KiB/s): min= 4096, max= 4096, per=20.72%, avg=4096.00, stdev= 0.00, samples=1 00:39:17.828 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:17.828 lat (usec) : 250=68.54%, 500=26.59%, 750=0.56%, 1000=0.19% 00:39:17.828 lat (msec) : 50=4.12% 00:39:17.828 cpu : usr=0.87%, sys=0.77%, ctx=535, majf=0, minf=1 00:39:17.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:17.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:17.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:17.828 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:17.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:17.828 job3: (groupid=0, jobs=1): err= 0: pid=938506: Sun Nov 17 19:00:04 2024 00:39:17.828 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:39:17.828 slat (nsec): min=6578, max=14903, avg=13251.00, stdev=1592.17 00:39:17.828 clat (usec): min=40422, max=41365, avg=40957.37, stdev=170.65 00:39:17.828 lat (usec): min=40429, max=41379, avg=40970.62, stdev=171.71 00:39:17.828 clat percentiles (usec): 00:39:17.828 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:17.828 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:17.828 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:17.828 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:17.828 
| 99.99th=[41157] 00:39:17.828 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:39:17.828 slat (nsec): min=6485, max=44863, avg=14454.44, stdev=5697.13 00:39:17.828 clat (usec): min=168, max=387, avg=199.42, stdev=19.84 00:39:17.828 lat (usec): min=176, max=418, avg=213.87, stdev=22.52 00:39:17.828 clat percentiles (usec): 00:39:17.828 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:39:17.828 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:39:17.828 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 231], 00:39:17.828 | 99.00th=[ 247], 99.50th=[ 260], 99.90th=[ 388], 99.95th=[ 388], 00:39:17.828 | 99.99th=[ 388] 00:39:17.828 bw ( KiB/s): min= 4096, max= 4096, per=20.72%, avg=4096.00, stdev= 0.00, samples=1 00:39:17.828 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:17.828 lat (usec) : 250=95.13%, 500=0.75% 00:39:17.828 lat (msec) : 50=4.12% 00:39:17.828 cpu : usr=0.49%, sys=0.49%, ctx=535, majf=0, minf=2 00:39:17.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:17.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:17.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:17.828 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:17.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:17.828 00:39:17.828 Run status group 0 (all jobs): 00:39:17.828 READ: bw=13.2MiB/s (13.9MB/s), 84.9KiB/s-9740KiB/s (87.0kB/s-9974kB/s), io=13.7MiB (14.4MB), run=1000-1036msec 00:39:17.828 WRITE: bw=19.3MiB/s (20.2MB/s), 1977KiB/s-9.99MiB/s (2024kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1036msec 00:39:17.828 00:39:17.828 Disk stats (read/write): 00:39:17.828 nvme0n1: ios=1974/2048, merge=0/0, ticks=765/322, in_queue=1087, util=98.50% 00:39:17.828 nvme0n2: ios=1065/1536, merge=0/0, ticks=724/279, in_queue=1003, util=97.74% 00:39:17.828 nvme0n3: ios=74/512, 
merge=0/0, ticks=1387/120, in_queue=1507, util=98.27% 00:39:17.828 nvme0n4: ios=39/512, merge=0/0, ticks=1599/96, in_queue=1695, util=98.13% 00:39:17.828 19:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:17.828 [global] 00:39:17.828 thread=1 00:39:17.828 invalidate=1 00:39:17.828 rw=write 00:39:17.828 time_based=1 00:39:17.828 runtime=1 00:39:17.828 ioengine=libaio 00:39:17.828 direct=1 00:39:17.828 bs=4096 00:39:17.828 iodepth=128 00:39:17.828 norandommap=0 00:39:17.828 numjobs=1 00:39:17.828 00:39:17.828 verify_dump=1 00:39:17.828 verify_backlog=512 00:39:17.828 verify_state_save=0 00:39:17.828 do_verify=1 00:39:17.828 verify=crc32c-intel 00:39:17.828 [job0] 00:39:17.828 filename=/dev/nvme0n1 00:39:17.828 [job1] 00:39:17.828 filename=/dev/nvme0n2 00:39:17.828 [job2] 00:39:17.828 filename=/dev/nvme0n3 00:39:17.828 [job3] 00:39:17.828 filename=/dev/nvme0n4 00:39:17.828 Could not set queue depth (nvme0n1) 00:39:17.828 Could not set queue depth (nvme0n2) 00:39:17.828 Could not set queue depth (nvme0n3) 00:39:17.828 Could not set queue depth (nvme0n4) 00:39:18.087 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:18.087 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:18.087 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:18.087 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:18.087 fio-3.35 00:39:18.087 Starting 4 threads 00:39:19.466 00:39:19.466 job0: (groupid=0, jobs=1): err= 0: pid=938737: Sun Nov 17 19:00:05 2024 00:39:19.466 read: IOPS=5485, BW=21.4MiB/s (22.5MB/s)(22.3MiB/1042msec) 00:39:19.466 slat (usec): min=2, max=10981, avg=83.65, stdev=531.34 
00:39:19.466 clat (usec): min=5179, max=50535, avg=11649.28, stdev=4906.86 00:39:19.466 lat (usec): min=5184, max=50541, avg=11732.93, stdev=4930.29 00:39:19.466 clat percentiles (usec): 00:39:19.466 | 1.00th=[ 6652], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9634], 00:39:19.466 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10814], 60.00th=[11076], 00:39:19.466 | 70.00th=[11338], 80.00th=[12387], 90.00th=[15401], 95.00th=[17171], 00:39:19.466 | 99.00th=[44827], 99.50th=[47973], 99.90th=[50070], 99.95th=[50070], 00:39:19.466 | 99.99th=[50594] 00:39:19.466 write: IOPS=5896, BW=23.0MiB/s (24.2MB/s)(24.0MiB/1042msec); 0 zone resets 00:39:19.466 slat (usec): min=2, max=8775, avg=73.31, stdev=407.93 00:39:19.466 clat (usec): min=790, max=53558, avg=10687.90, stdev=4142.35 00:39:19.466 lat (usec): min=818, max=53564, avg=10761.20, stdev=4156.67 00:39:19.466 clat percentiles (usec): 00:39:19.466 | 1.00th=[ 2409], 5.00th=[ 6390], 10.00th=[ 8160], 20.00th=[ 9241], 00:39:19.466 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11076], 00:39:19.466 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12911], 95.00th=[13829], 00:39:19.466 | 99.00th=[17171], 99.50th=[51119], 99.90th=[53216], 99.95th=[53740], 00:39:19.466 | 99.99th=[53740] 00:39:19.466 bw ( KiB/s): min=24232, max=24576, per=40.08%, avg=24404.00, stdev=243.24, samples=2 00:39:19.466 iops : min= 6058, max= 6144, avg=6101.00, stdev=60.81, samples=2 00:39:19.466 lat (usec) : 1000=0.17% 00:39:19.467 lat (msec) : 2=0.19%, 4=0.76%, 10=37.63%, 20=59.65%, 50=1.22% 00:39:19.467 lat (msec) : 100=0.37% 00:39:19.467 cpu : usr=5.38%, sys=9.41%, ctx=536, majf=0, minf=2 00:39:19.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:39:19.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:19.467 issued rwts: total=5716,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:19.467 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:39:19.467 job1: (groupid=0, jobs=1): err= 0: pid=938738: Sun Nov 17 19:00:05 2024 00:39:19.467 read: IOPS=3963, BW=15.5MiB/s (16.2MB/s)(15.6MiB/1006msec) 00:39:19.467 slat (usec): min=2, max=23607, avg=115.53, stdev=808.38 00:39:19.467 clat (usec): min=2603, max=53226, avg=15925.33, stdev=7865.06 00:39:19.467 lat (usec): min=7681, max=53247, avg=16040.86, stdev=7926.39 00:39:19.467 clat percentiles (usec): 00:39:19.467 | 1.00th=[ 7898], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[11076], 00:39:19.467 | 30.00th=[12256], 40.00th=[13042], 50.00th=[13829], 60.00th=[14353], 00:39:19.467 | 70.00th=[15008], 80.00th=[16450], 90.00th=[28967], 95.00th=[34341], 00:39:19.467 | 99.00th=[44827], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:39:19.467 | 99.99th=[53216] 00:39:19.467 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:39:19.467 slat (usec): min=4, max=12651, avg=122.43, stdev=711.90 00:39:19.467 clat (usec): min=7043, max=58888, avg=15526.26, stdev=9151.44 00:39:19.467 lat (usec): min=7792, max=58897, avg=15648.70, stdev=9224.52 00:39:19.467 clat percentiles (usec): 00:39:19.467 | 1.00th=[ 8029], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10552], 00:39:19.467 | 30.00th=[11600], 40.00th=[12780], 50.00th=[13435], 60.00th=[13698], 00:39:19.467 | 70.00th=[14222], 80.00th=[15139], 90.00th=[18220], 95.00th=[41157], 00:39:19.467 | 99.00th=[53740], 99.50th=[55313], 99.90th=[58983], 99.95th=[58983], 00:39:19.467 | 99.99th=[58983] 00:39:19.467 bw ( KiB/s): min=12416, max=20352, per=26.91%, avg=16384.00, stdev=5611.60, samples=2 00:39:19.467 iops : min= 3104, max= 5088, avg=4096.00, stdev=1402.90, samples=2 00:39:19.467 lat (msec) : 4=0.01%, 10=10.57%, 20=77.38%, 50=10.71%, 100=1.32% 00:39:19.467 cpu : usr=4.78%, sys=8.66%, ctx=318, majf=0, minf=1 00:39:19.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:19.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:19.467 issued rwts: total=3987,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:19.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:19.467 job2: (groupid=0, jobs=1): err= 0: pid=938739: Sun Nov 17 19:00:05 2024 00:39:19.467 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:39:19.467 slat (usec): min=2, max=28619, avg=154.83, stdev=1114.96 00:39:19.467 clat (usec): min=9055, max=89758, avg=20939.98, stdev=16281.20 00:39:19.467 lat (usec): min=9065, max=89767, avg=21094.80, stdev=16352.16 00:39:19.467 clat percentiles (usec): 00:39:19.467 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[11207], 20.00th=[11863], 00:39:19.467 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13435], 60.00th=[17957], 00:39:19.467 | 70.00th=[20317], 80.00th=[21103], 90.00th=[40109], 95.00th=[62653], 00:39:19.467 | 99.00th=[89654], 99.50th=[89654], 99.90th=[89654], 99.95th=[89654], 00:39:19.467 | 99.99th=[89654] 00:39:19.467 write: IOPS=3320, BW=13.0MiB/s (13.6MB/s)(13.1MiB/1007msec); 0 zone resets 00:39:19.467 slat (usec): min=2, max=16802, avg=150.23, stdev=853.90 00:39:19.467 clat (usec): min=4836, max=79600, avg=18814.64, stdev=13398.13 00:39:19.467 lat (usec): min=8606, max=79620, avg=18964.87, stdev=13491.36 00:39:19.467 clat percentiles (usec): 00:39:19.467 | 1.00th=[ 9241], 5.00th=[10552], 10.00th=[11863], 20.00th=[12387], 00:39:19.467 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[13698], 00:39:19.467 | 70.00th=[17957], 80.00th=[19268], 90.00th=[35390], 95.00th=[51119], 00:39:19.467 | 99.00th=[73925], 99.50th=[76022], 99.90th=[79168], 99.95th=[79168], 00:39:19.467 | 99.99th=[79168] 00:39:19.467 bw ( KiB/s): min=12288, max=13440, per=21.13%, avg=12864.00, stdev=814.59, samples=2 00:39:19.467 iops : min= 3072, max= 3360, avg=3216.00, stdev=203.65, samples=2 00:39:19.467 lat (msec) : 10=3.40%, 20=72.19%, 50=17.04%, 100=7.37% 
00:39:19.467 cpu : usr=2.58%, sys=4.77%, ctx=331, majf=0, minf=2 00:39:19.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:39:19.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:19.467 issued rwts: total=3072,3344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:19.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:19.467 job3: (groupid=0, jobs=1): err= 0: pid=938740: Sun Nov 17 19:00:05 2024 00:39:19.467 read: IOPS=2021, BW=8087KiB/s (8281kB/s)(8192KiB/1013msec) 00:39:19.467 slat (usec): min=3, max=15752, avg=165.36, stdev=991.60 00:39:19.467 clat (usec): min=7345, max=67014, avg=18835.20, stdev=9372.42 00:39:19.467 lat (usec): min=7353, max=67032, avg=19000.55, stdev=9466.44 00:39:19.467 clat percentiles (usec): 00:39:19.467 | 1.00th=[10683], 5.00th=[11863], 10.00th=[12780], 20.00th=[13042], 00:39:19.467 | 30.00th=[13698], 40.00th=[15008], 50.00th=[16057], 60.00th=[16712], 00:39:19.467 | 70.00th=[17957], 80.00th=[22152], 90.00th=[30278], 95.00th=[40633], 00:39:19.467 | 99.00th=[58459], 99.50th=[65799], 99.90th=[66847], 99.95th=[66847], 00:39:19.467 | 99.99th=[66847] 00:39:19.467 write: IOPS=2248, BW=8995KiB/s (9211kB/s)(9112KiB/1013msec); 0 zone resets 00:39:19.467 slat (usec): min=4, max=52325, avg=280.84, stdev=1558.50 00:39:19.467 clat (usec): min=4842, max=67211, avg=35173.70, stdev=16787.69 00:39:19.467 lat (usec): min=4850, max=87318, avg=35454.54, stdev=16922.36 00:39:19.467 clat percentiles (usec): 00:39:19.467 | 1.00th=[ 7963], 5.00th=[11207], 10.00th=[12518], 20.00th=[15270], 00:39:19.467 | 30.00th=[20579], 40.00th=[34341], 50.00th=[36439], 60.00th=[39060], 00:39:19.467 | 70.00th=[47973], 80.00th=[54264], 90.00th=[56361], 95.00th=[57934], 00:39:19.467 | 99.00th=[65274], 99.50th=[66847], 99.90th=[67634], 99.95th=[67634], 00:39:19.467 | 99.99th=[67634] 00:39:19.467 bw ( KiB/s): min= 7496, max= 
9704, per=14.12%, avg=8600.00, stdev=1561.29, samples=2 00:39:19.467 iops : min= 1874, max= 2426, avg=2150.00, stdev=390.32, samples=2 00:39:19.467 lat (msec) : 10=1.55%, 20=50.05%, 50=32.76%, 100=15.65% 00:39:19.467 cpu : usr=2.77%, sys=4.94%, ctx=257, majf=0, minf=1 00:39:19.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:39:19.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:19.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:19.467 issued rwts: total=2048,2278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:19.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:19.467 00:39:19.467 Run status group 0 (all jobs): 00:39:19.467 READ: bw=55.6MiB/s (58.3MB/s), 8087KiB/s-21.4MiB/s (8281kB/s-22.5MB/s), io=57.9MiB (60.7MB), run=1006-1042msec 00:39:19.467 WRITE: bw=59.5MiB/s (62.4MB/s), 8995KiB/s-23.0MiB/s (9211kB/s-24.2MB/s), io=62.0MiB (65.0MB), run=1006-1042msec 00:39:19.467 00:39:19.467 Disk stats (read/write): 00:39:19.467 nvme0n1: ios=4872/5120, merge=0/0, ticks=29381/26586, in_queue=55967, util=87.58% 00:39:19.467 nvme0n2: ios=3628/3687, merge=0/0, ticks=25379/19932, in_queue=45311, util=98.27% 00:39:19.467 nvme0n3: ios=2924/3072, merge=0/0, ticks=15985/13799, in_queue=29784, util=91.36% 00:39:19.467 nvme0n4: ios=1596/1886, merge=0/0, ticks=30366/65057, in_queue=95423, util=100.00% 00:39:19.467 19:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:19.467 [global] 00:39:19.467 thread=1 00:39:19.467 invalidate=1 00:39:19.467 rw=randwrite 00:39:19.467 time_based=1 00:39:19.467 runtime=1 00:39:19.467 ioengine=libaio 00:39:19.467 direct=1 00:39:19.467 bs=4096 00:39:19.467 iodepth=128 00:39:19.467 norandommap=0 00:39:19.467 numjobs=1 00:39:19.467 00:39:19.467 verify_dump=1 00:39:19.467 verify_backlog=512 
00:39:19.467 verify_state_save=0 00:39:19.467 do_verify=1 00:39:19.467 verify=crc32c-intel 00:39:19.467 [job0] 00:39:19.467 filename=/dev/nvme0n1 00:39:19.467 [job1] 00:39:19.467 filename=/dev/nvme0n2 00:39:19.467 [job2] 00:39:19.467 filename=/dev/nvme0n3 00:39:19.467 [job3] 00:39:19.467 filename=/dev/nvme0n4 00:39:19.468 Could not set queue depth (nvme0n1) 00:39:19.468 Could not set queue depth (nvme0n2) 00:39:19.468 Could not set queue depth (nvme0n3) 00:39:19.468 Could not set queue depth (nvme0n4) 00:39:19.468 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:19.468 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:19.468 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:19.468 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:19.468 fio-3.35 00:39:19.468 Starting 4 threads 00:39:20.844 00:39:20.844 job0: (groupid=0, jobs=1): err= 0: pid=938963: Sun Nov 17 19:00:07 2024 00:39:20.844 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:39:20.844 slat (nsec): min=1934, max=8502.8k, avg=107168.16, stdev=614993.25 00:39:20.844 clat (usec): min=7693, max=34372, avg=14298.15, stdev=5974.99 00:39:20.844 lat (usec): min=7696, max=37331, avg=14405.31, stdev=5994.68 00:39:20.844 clat percentiles (usec): 00:39:20.844 | 1.00th=[ 8717], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10552], 00:39:20.844 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:39:20.844 | 70.00th=[12780], 80.00th=[18482], 90.00th=[25560], 95.00th=[27132], 00:39:20.844 | 99.00th=[30540], 99.50th=[32113], 99.90th=[34341], 99.95th=[34341], 00:39:20.844 | 99.99th=[34341] 00:39:20.844 write: IOPS=4403, BW=17.2MiB/s (18.0MB/s)(17.2MiB/1002msec); 0 zone resets 00:39:20.844 slat (usec): min=2, max=27163, avg=121.19, 
stdev=797.20 00:39:20.844 clat (usec): min=1719, max=78884, avg=15457.03, stdev=10052.96 00:39:20.844 lat (usec): min=1730, max=78889, avg=15578.21, stdev=10101.78 00:39:20.844 clat percentiles (usec): 00:39:20.844 | 1.00th=[ 4621], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10683], 00:39:20.844 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:39:20.844 | 70.00th=[12780], 80.00th=[21365], 90.00th=[25035], 95.00th=[33817], 00:39:20.844 | 99.00th=[65274], 99.50th=[72877], 99.90th=[72877], 99.95th=[79168], 00:39:20.844 | 99.99th=[79168] 00:39:20.844 bw ( KiB/s): min=12288, max=21992, per=25.29%, avg=17140.00, stdev=6861.76, samples=2 00:39:20.844 iops : min= 3072, max= 5498, avg=4285.00, stdev=1715.44, samples=2 00:39:20.845 lat (msec) : 2=0.24%, 4=0.09%, 10=13.47%, 20=65.91%, 50=19.18% 00:39:20.845 lat (msec) : 100=1.10% 00:39:20.845 cpu : usr=2.80%, sys=4.90%, ctx=485, majf=0, minf=1 00:39:20.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:20.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:20.845 issued rwts: total=4096,4412,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.845 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:20.845 job1: (groupid=0, jobs=1): err= 0: pid=938965: Sun Nov 17 19:00:07 2024 00:39:20.845 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:39:20.845 slat (usec): min=2, max=10931, avg=109.57, stdev=712.25 00:39:20.845 clat (usec): min=5776, max=26732, avg=14697.14, stdev=3415.53 00:39:20.845 lat (usec): min=5781, max=26735, avg=14806.72, stdev=3480.99 00:39:20.845 clat percentiles (usec): 00:39:20.845 | 1.00th=[ 6718], 5.00th=[10028], 10.00th=[10290], 20.00th=[11076], 00:39:20.845 | 30.00th=[13173], 40.00th=[14353], 50.00th=[14615], 60.00th=[15139], 00:39:20.845 | 70.00th=[16188], 80.00th=[17171], 90.00th=[18744], 95.00th=[20841], 00:39:20.845 
| 99.00th=[22938], 99.50th=[25297], 99.90th=[26608], 99.95th=[26608], 00:39:20.845 | 99.99th=[26608] 00:39:20.845 write: IOPS=4383, BW=17.1MiB/s (18.0MB/s)(17.2MiB/1003msec); 0 zone resets 00:39:20.845 slat (usec): min=3, max=24487, avg=115.06, stdev=742.95 00:39:20.845 clat (usec): min=694, max=37541, avg=15171.49, stdev=5127.55 00:39:20.845 lat (usec): min=708, max=37546, avg=15286.55, stdev=5172.05 00:39:20.845 clat percentiles (usec): 00:39:20.845 | 1.00th=[ 4359], 5.00th=[ 7439], 10.00th=[ 9896], 20.00th=[11207], 00:39:20.845 | 30.00th=[13042], 40.00th=[14222], 50.00th=[14746], 60.00th=[15270], 00:39:20.845 | 70.00th=[16057], 80.00th=[18220], 90.00th=[21365], 95.00th=[26608], 00:39:20.845 | 99.00th=[31065], 99.50th=[31065], 99.90th=[37487], 99.95th=[37487], 00:39:20.845 | 99.99th=[37487] 00:39:20.845 bw ( KiB/s): min=16736, max=17424, per=25.20%, avg=17080.00, stdev=486.49, samples=2 00:39:20.845 iops : min= 4184, max= 4356, avg=4270.00, stdev=121.62, samples=2 00:39:20.845 lat (usec) : 750=0.02%, 1000=0.01% 00:39:20.845 lat (msec) : 2=0.01%, 4=0.21%, 10=9.47%, 20=81.01%, 50=9.27% 00:39:20.845 cpu : usr=2.59%, sys=4.39%, ctx=293, majf=0, minf=1 00:39:20.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:20.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:20.845 issued rwts: total=4096,4397,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.845 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:20.845 job2: (groupid=0, jobs=1): err= 0: pid=938967: Sun Nov 17 19:00:07 2024 00:39:20.845 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:39:20.845 slat (usec): min=2, max=45380, avg=144.51, stdev=1197.84 00:39:20.845 clat (usec): min=6866, max=66708, avg=19524.76, stdev=12055.97 00:39:20.845 lat (usec): min=6871, max=66737, avg=19669.27, stdev=12113.64 00:39:20.845 clat percentiles (usec): 
00:39:20.845 | 1.00th=[ 9110], 5.00th=[11076], 10.00th=[11600], 20.00th=[11731], 00:39:20.845 | 30.00th=[12125], 40.00th=[12518], 50.00th=[14091], 60.00th=[16712], 00:39:20.845 | 70.00th=[21627], 80.00th=[26608], 90.00th=[32113], 95.00th=[40633], 00:39:20.845 | 99.00th=[66323], 99.50th=[66323], 99.90th=[66847], 99.95th=[66847], 00:39:20.845 | 99.99th=[66847] 00:39:20.845 write: IOPS=3545, BW=13.9MiB/s (14.5MB/s)(13.9MiB/1004msec); 0 zone resets 00:39:20.845 slat (usec): min=2, max=15642, avg=150.01, stdev=816.28 00:39:20.845 clat (usec): min=2323, max=42465, avg=18929.63, stdev=10546.71 00:39:20.845 lat (usec): min=5914, max=42487, avg=19079.65, stdev=10633.51 00:39:20.845 clat percentiles (usec): 00:39:20.845 | 1.00th=[ 9110], 5.00th=[11207], 10.00th=[11731], 20.00th=[11863], 00:39:20.845 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13173], 60.00th=[13435], 00:39:20.845 | 70.00th=[18220], 80.00th=[33817], 90.00th=[38536], 95.00th=[39584], 00:39:20.845 | 99.00th=[40633], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:39:20.845 | 99.99th=[42206] 00:39:20.845 bw ( KiB/s): min= 6984, max=20480, per=20.26%, avg=13732.00, stdev=9543.11, samples=2 00:39:20.845 iops : min= 1746, max= 5120, avg=3433.00, stdev=2385.78, samples=2 00:39:20.845 lat (msec) : 4=0.02%, 10=2.11%, 20=68.26%, 50=27.65%, 100=1.96% 00:39:20.845 cpu : usr=3.19%, sys=3.69%, ctx=260, majf=0, minf=1 00:39:20.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:39:20.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:20.845 issued rwts: total=3072,3560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.845 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:20.845 job3: (groupid=0, jobs=1): err= 0: pid=938968: Sun Nov 17 19:00:07 2024 00:39:20.845 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:39:20.845 slat (usec): min=2, max=7109, 
avg=108.23, stdev=675.55 00:39:20.845 clat (usec): min=6884, max=21630, avg=13895.73, stdev=2369.16 00:39:20.845 lat (usec): min=6898, max=22966, avg=14003.96, stdev=2411.55 00:39:20.845 clat percentiles (usec): 00:39:20.845 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[11600], 20.00th=[12256], 00:39:20.845 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13304], 60.00th=[13960], 00:39:20.845 | 70.00th=[14746], 80.00th=[15664], 90.00th=[17695], 95.00th=[18482], 00:39:20.845 | 99.00th=[19792], 99.50th=[20055], 99.90th=[21365], 99.95th=[21627], 00:39:20.845 | 99.99th=[21627] 00:39:20.845 write: IOPS=4633, BW=18.1MiB/s (19.0MB/s)(18.1MiB/1002msec); 0 zone resets 00:39:20.845 slat (usec): min=3, max=10044, avg=99.76, stdev=629.24 00:39:20.845 clat (usec): min=525, max=22151, avg=13590.64, stdev=1921.55 00:39:20.845 lat (usec): min=677, max=22165, avg=13690.40, stdev=1968.75 00:39:20.845 clat percentiles (usec): 00:39:20.845 | 1.00th=[ 6980], 5.00th=[10814], 10.00th=[11863], 20.00th=[12649], 00:39:20.845 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13566], 60.00th=[13960], 00:39:20.845 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15270], 95.00th=[16057], 00:39:20.845 | 99.00th=[19268], 99.50th=[19792], 99.90th=[21890], 99.95th=[21890], 00:39:20.845 | 99.99th=[22152] 00:39:20.845 bw ( KiB/s): min=16384, max=20480, per=27.20%, avg=18432.00, stdev=2896.31, samples=2 00:39:20.845 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:39:20.845 lat (usec) : 750=0.05% 00:39:20.845 lat (msec) : 4=0.08%, 10=3.55%, 20=95.89%, 50=0.43% 00:39:20.845 cpu : usr=4.90%, sys=7.79%, ctx=397, majf=0, minf=1 00:39:20.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:39:20.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:20.845 issued rwts: total=4608,4643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.845 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:39:20.845 00:39:20.845 Run status group 0 (all jobs): 00:39:20.845 READ: bw=61.8MiB/s (64.8MB/s), 12.0MiB/s-18.0MiB/s (12.5MB/s-18.8MB/s), io=62.0MiB (65.0MB), run=1002-1004msec 00:39:20.845 WRITE: bw=66.2MiB/s (69.4MB/s), 13.9MiB/s-18.1MiB/s (14.5MB/s-19.0MB/s), io=66.5MiB (69.7MB), run=1002-1004msec 00:39:20.845 00:39:20.845 Disk stats (read/write): 00:39:20.845 nvme0n1: ios=3268/3584, merge=0/0, ticks=13078/17385, in_queue=30463, util=91.58% 00:39:20.845 nvme0n2: ios=3626/3648, merge=0/0, ticks=24600/21909, in_queue=46509, util=97.06% 00:39:20.845 nvme0n3: ios=3088/3072, merge=0/0, ticks=22501/23723, in_queue=46224, util=91.04% 00:39:20.845 nvme0n4: ios=3817/4096, merge=0/0, ticks=25615/28671, in_queue=54286, util=95.38% 00:39:20.845 19:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:20.845 19:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=939119 00:39:20.845 19:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:20.845 19:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:20.845 [global] 00:39:20.845 thread=1 00:39:20.845 invalidate=1 00:39:20.845 rw=read 00:39:20.845 time_based=1 00:39:20.845 runtime=10 00:39:20.845 ioengine=libaio 00:39:20.845 direct=1 00:39:20.845 bs=4096 00:39:20.845 iodepth=1 00:39:20.845 norandommap=1 00:39:20.845 numjobs=1 00:39:20.845 00:39:20.845 [job0] 00:39:20.845 filename=/dev/nvme0n1 00:39:20.845 [job1] 00:39:20.845 filename=/dev/nvme0n2 00:39:20.845 [job2] 00:39:20.845 filename=/dev/nvme0n3 00:39:20.845 [job3] 00:39:20.845 filename=/dev/nvme0n4 00:39:20.845 Could not set queue depth (nvme0n1) 00:39:20.845 Could not set queue depth (nvme0n2) 00:39:20.845 Could not set queue depth (nvme0n3) 00:39:20.845 
Could not set queue depth (nvme0n4) 00:39:21.105 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:21.105 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:21.105 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:21.105 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:21.105 fio-3.35 00:39:21.105 Starting 4 threads 00:39:23.640 19:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:23.928 19:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:24.186 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=29593600, buflen=4096 00:39:24.186 fio: pid=939470, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:24.444 19:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:24.444 19:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:24.444 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11022336, buflen=4096 00:39:24.444 fio: pid=939469, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:24.702 19:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:24.702 19:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:24.702 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5431296, buflen=4096 00:39:24.702 fio: pid=939467, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:24.960 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=21753856, buflen=4096 00:39:24.960 fio: pid=939468, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:24.960 19:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:24.960 19:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:24.960 00:39:24.960 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=939467: Sun Nov 17 19:00:11 2024 00:39:24.960 read: IOPS=379, BW=1515KiB/s (1552kB/s)(5304KiB/3500msec) 00:39:24.960 slat (usec): min=4, max=16922, avg=26.33, stdev=464.21 00:39:24.960 clat (usec): min=175, max=41298, avg=2592.05, stdev=9449.19 00:39:24.960 lat (usec): min=192, max=58018, avg=2618.39, stdev=9513.22 00:39:24.960 clat percentiles (usec): 00:39:24.960 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 215], 00:39:24.960 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 241], 00:39:24.960 | 70.00th=[ 273], 80.00th=[ 343], 90.00th=[ 461], 95.00th=[40633], 00:39:24.960 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:24.960 | 99.99th=[41157] 00:39:24.960 bw ( KiB/s): min= 112, max= 7168, per=9.96%, avg=1750.67, stdev=2825.25, samples=6 00:39:24.960 iops : min= 28, max= 1792, avg=437.67, stdev=706.31, samples=6 00:39:24.960 lat (usec) : 250=62.92%, 500=29.09%, 750=2.19% 00:39:24.960 lat 
(msec) : 50=5.73% 00:39:24.960 cpu : usr=0.14%, sys=0.71%, ctx=1332, majf=0, minf=1 00:39:24.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:24.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:24.960 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:24.960 issued rwts: total=1327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:24.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:24.960 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=939468: Sun Nov 17 19:00:11 2024 00:39:24.960 read: IOPS=1409, BW=5637KiB/s (5772kB/s)(20.7MiB/3769msec) 00:39:24.960 slat (usec): min=4, max=34533, avg=23.68, stdev=587.43 00:39:24.960 clat (usec): min=191, max=41993, avg=680.06, stdev=4237.58 00:39:24.960 lat (usec): min=196, max=42011, avg=703.75, stdev=4278.22 00:39:24.960 clat percentiles (usec): 00:39:24.960 | 1.00th=[ 198], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 204], 00:39:24.960 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 235], 60.00th=[ 245], 00:39:24.960 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 289], 00:39:24.960 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:24.960 | 99.99th=[42206] 00:39:24.960 bw ( KiB/s): min= 104, max=15355, per=27.74%, avg=4873.57, stdev=5812.63, samples=7 00:39:24.960 iops : min= 26, max= 3838, avg=1218.29, stdev=1452.93, samples=7 00:39:24.960 lat (usec) : 250=70.20%, 500=28.39%, 750=0.30% 00:39:24.960 lat (msec) : 50=1.09% 00:39:24.960 cpu : usr=0.32%, sys=1.09%, ctx=5323, majf=0, minf=1 00:39:24.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:24.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:24.960 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:24.960 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:24.960 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:39:24.960 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=939469: Sun Nov 17 19:00:11 2024 00:39:24.960 read: IOPS=833, BW=3335KiB/s (3415kB/s)(10.5MiB/3228msec) 00:39:24.960 slat (nsec): min=4716, max=69714, avg=14042.78, stdev=10643.94 00:39:24.960 clat (usec): min=202, max=60837, avg=1173.64, stdev=6013.51 00:39:24.960 lat (usec): min=210, max=60853, avg=1187.68, stdev=6014.30 00:39:24.960 clat percentiles (usec): 00:39:24.960 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:39:24.961 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 265], 00:39:24.961 | 70.00th=[ 289], 80.00th=[ 334], 90.00th=[ 383], 95.00th=[ 412], 00:39:24.961 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:39:24.961 | 99.99th=[61080] 00:39:24.961 bw ( KiB/s): min= 96, max=11088, per=20.37%, avg=3578.67, stdev=5380.01, samples=6 00:39:24.961 iops : min= 24, max= 2772, avg=894.67, stdev=1345.00, samples=6 00:39:24.961 lat (usec) : 250=45.54%, 500=51.93%, 750=0.19%, 1000=0.11% 00:39:24.961 lat (msec) : 50=2.15%, 100=0.04% 00:39:24.961 cpu : usr=0.50%, sys=1.27%, ctx=2694, majf=0, minf=2 00:39:24.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:24.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:24.961 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:24.961 issued rwts: total=2692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:24.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:24.961 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=939470: Sun Nov 17 19:00:11 2024 00:39:24.961 read: IOPS=2463, BW=9853KiB/s (10.1MB/s)(28.2MiB/2933msec) 00:39:24.961 slat (nsec): min=4347, max=53504, avg=9544.43, stdev=5441.23 00:39:24.961 clat (usec): min=196, max=42035, avg=391.07, 
stdev=2457.43 00:39:24.961 lat (usec): min=205, max=42049, avg=400.62, stdev=2458.20 00:39:24.961 clat percentiles (usec): 00:39:24.961 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 221], 00:39:24.961 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:39:24.961 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 281], 95.00th=[ 310], 00:39:24.961 | 99.00th=[ 474], 99.50th=[ 553], 99.90th=[41681], 99.95th=[42206], 00:39:24.961 | 99.99th=[42206] 00:39:24.961 bw ( KiB/s): min= 544, max=15984, per=65.71%, avg=11544.00, stdev=6386.30, samples=5 00:39:24.961 iops : min= 136, max= 3996, avg=2886.00, stdev=1596.58, samples=5 00:39:24.961 lat (usec) : 250=76.70%, 500=22.54%, 750=0.36%, 1000=0.01% 00:39:24.961 lat (msec) : 4=0.01%, 50=0.36% 00:39:24.961 cpu : usr=1.02%, sys=2.63%, ctx=7226, majf=0, minf=2 00:39:24.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:24.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:24.961 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:24.961 issued rwts: total=7226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:24.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:24.961 00:39:24.961 Run status group 0 (all jobs): 00:39:24.961 READ: bw=17.2MiB/s (18.0MB/s), 1515KiB/s-9853KiB/s (1552kB/s-10.1MB/s), io=64.7MiB (67.8MB), run=2933-3769msec 00:39:24.961 00:39:24.961 Disk stats (read/write): 00:39:24.961 nvme0n1: ios=1369/0, merge=0/0, ticks=4182/0, in_queue=4182, util=99.23% 00:39:24.961 nvme0n2: ios=4658/0, merge=0/0, ticks=3668/0, in_queue=3668, util=97.48% 00:39:24.961 nvme0n3: ios=2713/0, merge=0/0, ticks=3197/0, in_queue=3197, util=99.78% 00:39:24.961 nvme0n4: ios=7223/0, merge=0/0, ticks=2692/0, in_queue=2692, util=96.74% 00:39:25.219 19:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:25.219 
19:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:25.477 19:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:25.477 19:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:25.735 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:25.735 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:25.993 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:25.993 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:26.251 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:26.251 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 939119 00:39:26.251 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:26.251 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:26.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:26.509 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:26.509 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:39:26.509 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:26.509 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:26.509 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:26.509 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:26.509 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:39:26.509 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:26.509 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:26.509 nvmf hotplug test: fio failed as expected 00:39:26.509 19:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:26.767 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:26.767 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:26.767 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:26.767 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:39:26.767 19:00:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:26.767 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:26.767 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:26.767 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:26.767 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:26.768 rmmod nvme_tcp 00:39:26.768 rmmod nvme_fabrics 00:39:26.768 rmmod nvme_keyring 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 937107 ']' 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 937107 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 937107 ']' 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 937107 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 937107 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 937107' 00:39:26.768 killing process with pid 937107 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 937107 00:39:26.768 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 937107 00:39:27.026 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:27.026 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:27.026 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:27.026 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:27.026 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:39:27.026 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:27.026 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:39:27.026 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:27.026 19:00:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:27.026 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:27.026 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:27.026 19:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:28.933 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:28.933 00:39:28.933 real 0m23.639s 00:39:28.933 user 1m6.580s 00:39:28.933 sys 0m10.386s 00:39:28.933 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:28.933 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:28.933 ************************************ 00:39:28.933 END TEST nvmf_fio_target 00:39:28.933 ************************************ 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:29.192 ************************************ 00:39:29.192 START TEST nvmf_bdevio 00:39:29.192 ************************************ 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:29.192 * Looking for test storage... 00:39:29.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:29.192 19:00:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:29.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.192 --rc genhtml_branch_coverage=1 00:39:29.192 --rc genhtml_function_coverage=1 00:39:29.192 --rc genhtml_legend=1 00:39:29.192 --rc geninfo_all_blocks=1 00:39:29.192 --rc geninfo_unexecuted_blocks=1 00:39:29.192 00:39:29.192 ' 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:29.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.192 --rc genhtml_branch_coverage=1 00:39:29.192 --rc genhtml_function_coverage=1 00:39:29.192 --rc genhtml_legend=1 00:39:29.192 --rc geninfo_all_blocks=1 00:39:29.192 --rc geninfo_unexecuted_blocks=1 00:39:29.192 00:39:29.192 ' 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:29.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.192 --rc genhtml_branch_coverage=1 00:39:29.192 --rc genhtml_function_coverage=1 00:39:29.192 --rc genhtml_legend=1 00:39:29.192 --rc geninfo_all_blocks=1 00:39:29.192 --rc geninfo_unexecuted_blocks=1 00:39:29.192 00:39:29.192 ' 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:29.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:29.192 --rc genhtml_branch_coverage=1 00:39:29.192 --rc genhtml_function_coverage=1 00:39:29.192 --rc genhtml_legend=1 00:39:29.192 --rc 
geninfo_all_blocks=1 00:39:29.192 --rc geninfo_unexecuted_blocks=1 00:39:29.192 00:39:29.192 ' 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:29.192 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:29.192 19:00:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:29.193 19:00:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:29.193 19:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:31.727 19:00:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:31.727 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:31.727 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:31.727 19:00:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:31.727 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:31.727 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:31.728 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:31.728 19:00:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:31.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:31.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:39:31.728 00:39:31.728 --- 10.0.0.2 ping statistics --- 00:39:31.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:31.728 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:31.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:31.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:39:31.728 00:39:31.728 --- 10.0.0.1 ping statistics --- 00:39:31.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:31.728 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=942435 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 942435 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 942435 ']' 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:31.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:31.728 19:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:31.728 [2024-11-17 19:00:17.988054] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:31.728 [2024-11-17 19:00:17.989134] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:39:31.728 [2024-11-17 19:00:17.989186] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:31.728 [2024-11-17 19:00:18.062600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:31.728 [2024-11-17 19:00:18.109831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:31.728 [2024-11-17 19:00:18.109883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:31.728 [2024-11-17 19:00:18.109908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:31.728 [2024-11-17 19:00:18.109919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:31.728 [2024-11-17 19:00:18.109929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:31.728 [2024-11-17 19:00:18.111605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:31.728 [2024-11-17 19:00:18.111705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:31.728 [2024-11-17 19:00:18.111758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:31.728 [2024-11-17 19:00:18.111761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:31.728 [2024-11-17 19:00:18.195776] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:31.728 [2024-11-17 19:00:18.196012] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:31.728 [2024-11-17 19:00:18.196262] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:31.728 [2024-11-17 19:00:18.196846] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:31.728 [2024-11-17 19:00:18.197104] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:31.728 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:31.728 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:39:31.728 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:31.728 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:31.728 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:31.728 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:31.728 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:31.728 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.728 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:31.728 [2024-11-17 19:00:18.252472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:31.728 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.729 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:31.729 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.729 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:31.988 Malloc0 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:31.988 [2024-11-17 19:00:18.324772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:31.988 { 00:39:31.988 "params": { 00:39:31.988 "name": "Nvme$subsystem", 00:39:31.988 "trtype": "$TEST_TRANSPORT", 00:39:31.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:31.988 "adrfam": "ipv4", 00:39:31.988 "trsvcid": "$NVMF_PORT", 00:39:31.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:31.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:31.988 "hdgst": ${hdgst:-false}, 00:39:31.988 "ddgst": ${ddgst:-false} 00:39:31.988 }, 00:39:31.988 "method": "bdev_nvme_attach_controller" 00:39:31.988 } 00:39:31.988 EOF 00:39:31.988 )") 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:39:31.988 19:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:31.988 "params": { 00:39:31.988 "name": "Nvme1", 00:39:31.988 "trtype": "tcp", 00:39:31.988 "traddr": "10.0.0.2", 00:39:31.988 "adrfam": "ipv4", 00:39:31.988 "trsvcid": "4420", 00:39:31.988 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:31.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:31.988 "hdgst": false, 00:39:31.988 "ddgst": false 00:39:31.988 }, 00:39:31.988 "method": "bdev_nvme_attach_controller" 00:39:31.988 }' 00:39:31.989 [2024-11-17 19:00:18.377150] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:39:31.989 [2024-11-17 19:00:18.377218] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942466 ] 00:39:31.989 [2024-11-17 19:00:18.447347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:31.989 [2024-11-17 19:00:18.499451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:31.989 [2024-11-17 19:00:18.499506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:31.989 [2024-11-17 19:00:18.499510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:32.247 I/O targets: 00:39:32.247 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:32.247 00:39:32.247 00:39:32.247 CUnit - A unit testing framework for C - Version 2.1-3 00:39:32.247 http://cunit.sourceforge.net/ 00:39:32.247 00:39:32.247 00:39:32.247 Suite: bdevio tests on: Nvme1n1 00:39:32.247 Test: blockdev write read block ...passed 00:39:32.247 Test: blockdev write zeroes read block ...passed 00:39:32.247 Test: blockdev write zeroes read no split ...passed 00:39:32.507 Test: blockdev 
write zeroes read split ...passed 00:39:32.507 Test: blockdev write zeroes read split partial ...passed 00:39:32.507 Test: blockdev reset ...[2024-11-17 19:00:18.863248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:39:32.507 [2024-11-17 19:00:18.863356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99bac0 (9): Bad file descriptor 00:39:32.507 [2024-11-17 19:00:18.996751] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:39:32.507 passed 00:39:32.507 Test: blockdev write read 8 blocks ...passed 00:39:32.507 Test: blockdev write read size > 128k ...passed 00:39:32.507 Test: blockdev write read invalid size ...passed 00:39:32.767 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:32.767 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:32.767 Test: blockdev write read max offset ...passed 00:39:32.767 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:32.767 Test: blockdev writev readv 8 blocks ...passed 00:39:32.767 Test: blockdev writev readv 30 x 1block ...passed 00:39:32.767 Test: blockdev writev readv block ...passed 00:39:32.767 Test: blockdev writev readv size > 128k ...passed 00:39:32.767 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:32.767 Test: blockdev comparev and writev ...[2024-11-17 19:00:19.251956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.767 [2024-11-17 19:00:19.252002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:32.767 [2024-11-17 19:00:19.252035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.767 
[2024-11-17 19:00:19.252054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:32.767 [2024-11-17 19:00:19.252454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.767 [2024-11-17 19:00:19.252480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:32.767 [2024-11-17 19:00:19.252504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.767 [2024-11-17 19:00:19.252523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:32.767 [2024-11-17 19:00:19.252914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.767 [2024-11-17 19:00:19.252940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:32.767 [2024-11-17 19:00:19.252963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.767 [2024-11-17 19:00:19.252981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:32.767 [2024-11-17 19:00:19.253341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.767 [2024-11-17 19:00:19.253371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:32.767 [2024-11-17 19:00:19.253394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:32.767 [2024-11-17 19:00:19.253411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:32.767 passed 00:39:32.767 Test: blockdev nvme passthru rw ...passed 00:39:32.767 Test: blockdev nvme passthru vendor specific ...[2024-11-17 19:00:19.334922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:32.767 [2024-11-17 19:00:19.334950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:32.767 [2024-11-17 19:00:19.335098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:32.767 [2024-11-17 19:00:19.335123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:32.767 [2024-11-17 19:00:19.335276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:32.767 [2024-11-17 19:00:19.335299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:32.767 [2024-11-17 19:00:19.335449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:32.767 [2024-11-17 19:00:19.335473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:32.767 passed 00:39:33.025 Test: blockdev nvme admin passthru ...passed 00:39:33.025 Test: blockdev copy ...passed 00:39:33.025 00:39:33.025 Run Summary: Type Total Ran Passed Failed Inactive 00:39:33.025 suites 1 1 n/a 0 0 00:39:33.025 tests 23 23 23 0 0 00:39:33.025 asserts 152 152 152 0 n/a 00:39:33.025 00:39:33.025 Elapsed time = 1.354 
seconds 00:39:33.025 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:33.025 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:33.025 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:33.025 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:33.025 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:33.025 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:33.025 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:33.025 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:33.025 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:33.025 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:33.025 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:33.025 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:33.025 rmmod nvme_tcp 00:39:33.025 rmmod nvme_fabrics 00:39:33.025 rmmod nvme_keyring 00:39:33.283 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:33.283 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:33.283 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:33.283 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 942435 ']' 00:39:33.283 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 942435 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 942435 ']' 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 942435 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 942435 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 942435' 00:39:33.284 killing process with pid 942435 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 942435 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 942435 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:33.284 19:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:35.825 19:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:35.825 00:39:35.825 real 0m6.352s 00:39:35.825 user 0m8.556s 00:39:35.825 sys 0m2.549s 00:39:35.825 19:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:35.825 19:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:35.825 ************************************ 00:39:35.825 END TEST nvmf_bdevio 00:39:35.825 ************************************ 00:39:35.825 19:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:35.825 00:39:35.825 real 3m54.399s 00:39:35.825 user 8m49.677s 00:39:35.825 sys 1m26.192s 00:39:35.825 19:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:39:35.825 19:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:35.825 ************************************ 00:39:35.825 END TEST nvmf_target_core_interrupt_mode 00:39:35.825 ************************************ 00:39:35.825 19:00:21 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:35.825 19:00:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:35.825 19:00:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:35.825 19:00:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:35.825 ************************************ 00:39:35.825 START TEST nvmf_interrupt 00:39:35.825 ************************************ 00:39:35.825 19:00:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:35.825 * Looking for test storage... 
00:39:35.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:35.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.825 --rc genhtml_branch_coverage=1 00:39:35.825 --rc genhtml_function_coverage=1 00:39:35.825 --rc genhtml_legend=1 00:39:35.825 --rc geninfo_all_blocks=1 00:39:35.825 --rc geninfo_unexecuted_blocks=1 00:39:35.825 00:39:35.825 ' 00:39:35.825 19:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:35.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.826 --rc genhtml_branch_coverage=1 00:39:35.826 --rc 
genhtml_function_coverage=1 00:39:35.826 --rc genhtml_legend=1 00:39:35.826 --rc geninfo_all_blocks=1 00:39:35.826 --rc geninfo_unexecuted_blocks=1 00:39:35.826 00:39:35.826 ' 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:35.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.826 --rc genhtml_branch_coverage=1 00:39:35.826 --rc genhtml_function_coverage=1 00:39:35.826 --rc genhtml_legend=1 00:39:35.826 --rc geninfo_all_blocks=1 00:39:35.826 --rc geninfo_unexecuted_blocks=1 00:39:35.826 00:39:35.826 ' 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:35.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.826 --rc genhtml_branch_coverage=1 00:39:35.826 --rc genhtml_function_coverage=1 00:39:35.826 --rc genhtml_legend=1 00:39:35.826 --rc geninfo_all_blocks=1 00:39:35.826 --rc geninfo_unexecuted_blocks=1 00:39:35.826 00:39:35.826 ' 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:35.826 
19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.826 
19:00:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:35.826 19:00:22 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:35.826 
19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:35.826 19:00:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:37.731 19:00:24 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:37.731 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:37.731 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:37.731 19:00:24 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:37.731 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:37.731 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:37.731 19:00:24 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:37.731 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:37.732 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:37.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:37.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:39:37.732 00:39:37.732 --- 10.0.0.2 ping statistics --- 00:39:37.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:37.732 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:39:37.732 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:37.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:37.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:39:37.732 00:39:37.732 --- 10.0.0.1 ping statistics --- 00:39:37.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:37.732 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:39:37.732 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:37.732 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:39:37.732 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:37.732 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:37.732 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:37.732 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:37.732 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:37.732 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:37.732 19:00:24 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:37.991 19:00:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:37.991 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:37.991 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:37.991 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:37.991 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=944551 00:39:37.991 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:37.991 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 944551 00:39:37.991 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 944551 ']' 00:39:37.991 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.991 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:37.991 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:37.991 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:37.991 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:37.991 [2024-11-17 19:00:24.371764] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:37.991 [2024-11-17 19:00:24.372825] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:39:37.991 [2024-11-17 19:00:24.372893] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:37.991 [2024-11-17 19:00:24.450197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:37.991 [2024-11-17 19:00:24.496902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:37.991 [2024-11-17 19:00:24.496978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:37.991 [2024-11-17 19:00:24.496993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:37.991 [2024-11-17 19:00:24.497004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:37.991 [2024-11-17 19:00:24.497013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:37.991 [2024-11-17 19:00:24.501702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:37.991 [2024-11-17 19:00:24.501713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.250 [2024-11-17 19:00:24.587639] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:38.250 [2024-11-17 19:00:24.587688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:38.250 [2024-11-17 19:00:24.587934] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:39:38.250 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:38.250 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:39:38.250 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:38.250 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:38.250 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.250 19:00:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:38.250 19:00:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:38.250 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:38.250 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:38.250 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:38.251 5000+0 records in 00:39:38.251 5000+0 records out 00:39:38.251 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0120165 s, 852 MB/s 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.251 AIO0 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.251 19:00:24 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.251 [2024-11-17 19:00:24.678345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.251 [2024-11-17 19:00:24.702619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 944551 0 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 944551 0 idle 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=944551 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 944551 -w 256 00:39:38.251 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:38.510 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 944551 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.25 reactor_0' 00:39:38.510 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 944551 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.25 reactor_0 00:39:38.510 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:38.510 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:38.510 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:38.510 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:38.510 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:38.510 19:00:24 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:38.510 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:38.510 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 944551 1 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 944551 1 idle 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=944551 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 944551 -w 256 00:39:38.511 19:00:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 944556 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.00 reactor_1' 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 944556 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.00 
reactor_1 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=944716 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 944551 0 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 944551 0 busy 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=944551 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 944551 -w 256 00:39:38.511 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 944551 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.46 reactor_0' 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 944551 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.46 reactor_0 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 944551 1 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 944551 1 busy 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=944551 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 944551 -w 256 00:39:38.771 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:39.030 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 944556 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.27 reactor_1' 00:39:39.031 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 944556 root 20 0 128.2g 48384 34560 R 99.9 0.1 0:00.27 reactor_1 00:39:39.031 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:39.031 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:39.031 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:39.031 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:39.031 19:00:25 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:39.031 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:39.031 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:39.031 19:00:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:39.031 19:00:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 944716 00:39:49.014 Initializing NVMe Controllers 00:39:49.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:49.014 Controller IO queue size 256, less than required. 00:39:49.014 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:49.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:49.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:49.014 Initialization complete. Launching workers. 
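The `spdk_nvme_perf` run launched above (`-q 256 -o 4096`) reports both IOPS and MiB/s in its summary; the two columns are related by MiB/s = IOPS × io_size / 2^20. A quick cross-check against the NSID 1 / core 2 row of this run's summary:

```shell
# Sanity check: 13863.37 IOPS at 4096-byte IOs should equal the reported
# 54.15 MiB/s (13863.37 * 4096 bytes/s, converted to MiB/s).
awk 'BEGIN { printf "%.2f MiB/s\n", 13863.37 * 4096 / (1024 * 1024) }'
# prints: 54.15 MiB/s
```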
00:39:49.014 ======================================================== 00:39:49.014 Latency(us) 00:39:49.014 Device Information : IOPS MiB/s Average min max 00:39:49.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13863.37 54.15 18477.37 4416.17 22806.29 00:39:49.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13704.57 53.53 18692.40 4382.00 22781.44 00:39:49.014 ======================================================== 00:39:49.014 Total : 27567.94 107.69 18584.26 4382.00 22806.29 00:39:49.014 00:39:49.014 19:00:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:49.014 19:00:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 944551 0 00:39:49.014 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 944551 0 idle 00:39:49.014 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=944551 00:39:49.014 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:49.014 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:49.014 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:49.014 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:49.014 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:49.014 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:49.014 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:49.014 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:49.014 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:49.014 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 944551 -w 256 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_0 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 944551 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:20.20 reactor_0' 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 944551 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:20.20 reactor_0 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 944551 1 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 944551 1 idle 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=944551 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:49.015 19:00:35 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 944551 -w 256 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 944556 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:09.98 reactor_1' 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 944556 root 20 0 128.2g 48384 34560 S 0.0 0.1 0:09.98 reactor_1 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:49.015 19:00:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:49.273 19:00:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
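After `nvme connect`, the test waits for the namespace to appear by polling `lsblk` for the expected serial (`waitforserial` in autotest_common.sh, traced below). A simplified sketch of that polling pattern, with the `lsblk` probe stubbed out (`probe_serial` is hypothetical) so the sketch runs anywhere:

```shell
# Stub probe standing in for:
#   lsblk -l -o NAME,SERIAL | grep -c "$serial"
# Here it "finds" the device on the second poll.
attempts=0
probe_serial() { attempts=$((attempts + 1)); (( attempts >= 2 )); }

# Poll up to 16 times for the serial, as waitforserial does; return
# success as soon as the device count matches.
waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        probe_serial "$serial" && return 0
        # real code sleeps 2s between polls
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME && echo "device visible after $attempts polls"
# prints: device visible after 2 polls
```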
00:39:49.273 19:00:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:39:49.273 19:00:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:49.273 19:00:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:49.273 19:00:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 944551 0 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 944551 0 idle 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=944551 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 944551 -w 256 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 944551 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:20.31 reactor_0' 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 944551 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:20.31 reactor_0 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 944551 1 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 944551 1 idle 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=944551 00:39:51.813 
19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 944551 -w 256 00:39:51.813 19:00:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 944556 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:10.02 reactor_1' 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 944556 root 20 0 128.2g 60672 34560 S 0.0 0.1 0:10.02 reactor_1 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:51.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:51.813 rmmod nvme_tcp 00:39:51.813 rmmod nvme_fabrics 00:39:51.813 rmmod nvme_keyring 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:51.813 19:00:38 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 944551 ']' 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 944551 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 944551 ']' 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 944551 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 944551 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 944551' 00:39:51.813 killing process with pid 944551 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 944551 00:39:51.813 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 944551 00:39:52.073 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:52.073 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:52.073 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:52.073 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:39:52.073 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:39:52.073 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:39:52.073 19:00:38 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:52.073 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:52.073 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:52.073 19:00:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:52.073 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:52.073 19:00:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:54.610 19:00:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:54.610 00:39:54.610 real 0m18.672s 00:39:54.610 user 0m37.537s 00:39:54.610 sys 0m6.221s 00:39:54.610 19:00:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:54.610 19:00:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:54.610 ************************************ 00:39:54.610 END TEST nvmf_interrupt 00:39:54.610 ************************************ 00:39:54.610 00:39:54.610 real 32m50.890s 00:39:54.610 user 87m8.164s 00:39:54.610 sys 8m8.846s 00:39:54.610 19:00:40 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:54.610 19:00:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:54.610 ************************************ 00:39:54.610 END TEST nvmf_tcp 00:39:54.610 ************************************ 00:39:54.610 19:00:40 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:39:54.610 19:00:40 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:54.610 19:00:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:54.610 19:00:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:54.610 19:00:40 -- common/autotest_common.sh@10 -- # set +x 00:39:54.610 ************************************ 
00:39:54.610 START TEST spdkcli_nvmf_tcp 00:39:54.610 ************************************ 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:39:54.610 * Looking for test storage... 00:39:54.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:54.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.610 --rc genhtml_branch_coverage=1 00:39:54.610 --rc genhtml_function_coverage=1 00:39:54.610 --rc genhtml_legend=1 00:39:54.610 --rc geninfo_all_blocks=1 00:39:54.610 --rc geninfo_unexecuted_blocks=1 00:39:54.610 00:39:54.610 ' 00:39:54.610 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:54.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.610 --rc genhtml_branch_coverage=1 00:39:54.610 --rc genhtml_function_coverage=1 00:39:54.610 --rc genhtml_legend=1 00:39:54.610 --rc geninfo_all_blocks=1 
00:39:54.611 --rc geninfo_unexecuted_blocks=1 00:39:54.611 00:39:54.611 ' 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:54.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.611 --rc genhtml_branch_coverage=1 00:39:54.611 --rc genhtml_function_coverage=1 00:39:54.611 --rc genhtml_legend=1 00:39:54.611 --rc geninfo_all_blocks=1 00:39:54.611 --rc geninfo_unexecuted_blocks=1 00:39:54.611 00:39:54.611 ' 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:54.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.611 --rc genhtml_branch_coverage=1 00:39:54.611 --rc genhtml_function_coverage=1 00:39:54.611 --rc genhtml_legend=1 00:39:54.611 --rc geninfo_all_blocks=1 00:39:54.611 --rc geninfo_unexecuted_blocks=1 00:39:54.611 00:39:54.611 ' 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
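The `lt 1.15 2` / `cmp_versions` trace at the start of this test compares dotted version strings component-wise (so the lcov version gates the coverage options). A self-contained sketch of that comparison, assuming plain dot-separated numeric components (the real scripts/common.sh also splits on `-` and `:`):

```shell
# Component-wise "less than" for dotted versions: split both on dots,
# compare as integers, treat missing components as 0 (so 1.15 < 2,
# and 1.9 < 1.15, which plain string comparison would get wrong).
version_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2
```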
00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:54.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=946706 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 946706 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 946706 ']' 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:54.611 19:00:40 
spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:54.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:54.611 19:00:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:54.611 [2024-11-17 19:00:40.905555] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:39:54.611 [2024-11-17 19:00:40.905643] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946706 ] 00:39:54.611 [2024-11-17 19:00:40.975647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:54.611 [2024-11-17 19:00:41.028091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:54.611 [2024-11-17 19:00:41.028094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.611 19:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:54.611 19:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:39:54.611 19:00:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:39:54.611 19:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:54.611 19:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:54.611 19:00:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:39:54.611 19:00:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:39:54.611 19:00:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:39:54.611 
19:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:54.611 19:00:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:54.611 19:00:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:54.611 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:54.611 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:39:54.612 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:39:54.612 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:39:54.612 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:39:54.612 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:39:54.612 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:54.612 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:54.612 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:54.612 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:39:54.612 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:39:54.612 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:39:54.612 ' 00:39:57.902 [2024-11-17 19:00:43.845217] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:58.836 [2024-11-17 19:00:45.113447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:01.372 [2024-11-17 19:00:47.504769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:40:03.280 [2024-11-17 19:00:49.514807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:04.659 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:04.659 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:04.659 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:04.659 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:04.659 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:04.659 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:04.659 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:04.659 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:04.659 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:04.659 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:04.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:04.659 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:04.659 19:00:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:04.659 19:00:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:04.659 
19:00:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:04.659 19:00:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:04.659 19:00:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:04.659 19:00:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:04.659 19:00:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:04.659 19:00:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:05.225 19:00:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:05.225 19:00:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:05.225 19:00:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:05.225 19:00:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:05.225 19:00:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:05.225 19:00:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:05.225 19:00:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:05.225 19:00:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:05.225 19:00:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:05.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:05.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:05.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:05.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:05.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:05.225 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:05.225 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:05.226 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:05.226 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:05.226 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:05.226 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:05.226 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:05.226 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:05.226 ' 00:40:10.500 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:10.500 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:10.500 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:10.500 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:10.500 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:10.500 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:10.500 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:10.500 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:10.500 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:10.500 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:10.500 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:10.500 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:10.500 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:10.500 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:10.500 19:00:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:10.500 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:10.500 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:10.500 19:00:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 946706 00:40:10.500 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 946706 ']' 00:40:10.500 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 946706 00:40:10.500 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:40:10.500 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:10.500 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 946706 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 946706' 00:40:10.760 killing process with pid 946706 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 946706 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 946706 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 946706 ']' 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 946706 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 946706 ']' 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 946706 00:40:10.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (946706) - No such process 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 946706 is not found' 00:40:10.760 Process with pid 946706 is not found 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:10.760 00:40:10.760 real 0m16.556s 00:40:10.760 user 0m35.337s 00:40:10.760 sys 0m0.783s 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:10.760 19:00:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:10.760 ************************************ 00:40:10.760 END TEST spdkcli_nvmf_tcp 00:40:10.760 ************************************ 00:40:10.760 19:00:57 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:10.760 19:00:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:10.760 19:00:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:10.760 19:00:57 -- common/autotest_common.sh@10 
-- # set +x 00:40:10.760 ************************************ 00:40:10.760 START TEST nvmf_identify_passthru 00:40:10.760 ************************************ 00:40:10.760 19:00:57 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:11.019 * Looking for test storage... 00:40:11.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:11.020 19:00:57 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:11.020 19:00:57 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:40:11.020 19:00:57 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:11.020 19:00:57 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:11.020 19:00:57 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:11.020 19:00:57 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:11.020 19:00:57 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:11.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.020 --rc genhtml_branch_coverage=1 00:40:11.020 --rc genhtml_function_coverage=1 00:40:11.020 --rc genhtml_legend=1 00:40:11.020 --rc geninfo_all_blocks=1 00:40:11.020 --rc geninfo_unexecuted_blocks=1 00:40:11.020 00:40:11.020 ' 00:40:11.020 
19:00:57 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:11.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.020 --rc genhtml_branch_coverage=1 00:40:11.020 --rc genhtml_function_coverage=1 00:40:11.020 --rc genhtml_legend=1 00:40:11.020 --rc geninfo_all_blocks=1 00:40:11.020 --rc geninfo_unexecuted_blocks=1 00:40:11.020 00:40:11.020 ' 00:40:11.020 19:00:57 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:11.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.020 --rc genhtml_branch_coverage=1 00:40:11.020 --rc genhtml_function_coverage=1 00:40:11.020 --rc genhtml_legend=1 00:40:11.020 --rc geninfo_all_blocks=1 00:40:11.020 --rc geninfo_unexecuted_blocks=1 00:40:11.020 00:40:11.020 ' 00:40:11.020 19:00:57 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:11.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.020 --rc genhtml_branch_coverage=1 00:40:11.020 --rc genhtml_function_coverage=1 00:40:11.020 --rc genhtml_legend=1 00:40:11.020 --rc geninfo_all_blocks=1 00:40:11.020 --rc geninfo_unexecuted_blocks=1 00:40:11.020 00:40:11.020 ' 00:40:11.020 19:00:57 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:11.020 19:00:57 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.020 19:00:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.020 19:00:57 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.020 19:00:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:11.020 19:00:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:11.020 19:00:57 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:11.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:11.020 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:11.020 19:00:57 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:11.020 19:00:57 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:11.020 19:00:57 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.020 19:00:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.021 19:00:57 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.021 19:00:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:11.021 19:00:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.021 19:00:57 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:11.021 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:11.021 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:11.021 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:11.021 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:11.021 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:11.021 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:11.021 19:00:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:11.021 19:00:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:11.021 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:11.021 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:11.021 19:00:57 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:11.021 19:00:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:13.551 
19:00:59 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:13.551 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:13.551 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:13.551 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:13.551 19:00:59 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:13.551 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:13.551 
19:00:59 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:13.551 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:13.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:13.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:40:13.552 00:40:13.552 --- 10.0.0.2 ping statistics --- 00:40:13.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.552 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:13.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:13.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:40:13.552 00:40:13.552 --- 10.0.0.1 ping statistics --- 00:40:13.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.552 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:13.552 19:00:59 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:13.552 19:00:59 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:13.552 19:00:59 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:13.552 19:00:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:13.552 19:00:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:13.552 
19:00:59 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:13.552 19:00:59 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:40:13.552 19:00:59 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:40:13.552 19:00:59 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:40:13.552 19:00:59 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:40:13.552 19:00:59 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:40:13.552 19:00:59 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:13.552 19:00:59 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:13.552 19:00:59 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:40:13.552 19:00:59 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:40:13.552 19:00:59 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:40:13.552 19:00:59 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:40:13.552 19:00:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:13.552 19:00:59 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:13.552 19:00:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:13.552 19:00:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:13.552 19:00:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:17.748 19:01:03 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:40:17.748 19:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:17.748 19:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:17.748 19:01:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:21.946 19:01:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:21.946 19:01:08 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:21.946 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:21.946 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:21.946 19:01:08 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:21.946 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:21.946 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:21.946 19:01:08 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=951272 00:40:21.946 19:01:08 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:21.946 19:01:08 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:21.946 19:01:08 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 951272 00:40:21.946 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 951272 ']' 00:40:21.946 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:40:21.946 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:21.947 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:21.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:21.947 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:21.947 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:21.947 [2024-11-17 19:01:08.269588] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:40:21.947 [2024-11-17 19:01:08.269697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:21.947 [2024-11-17 19:01:08.346156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:21.947 [2024-11-17 19:01:08.393718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:21.947 [2024-11-17 19:01:08.393772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:21.947 [2024-11-17 19:01:08.393800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:21.947 [2024-11-17 19:01:08.393811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:21.947 [2024-11-17 19:01:08.393822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:21.947 [2024-11-17 19:01:08.395384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:21.947 [2024-11-17 19:01:08.395453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:21.947 [2024-11-17 19:01:08.395529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:21.947 [2024-11-17 19:01:08.395532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:21.947 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:21.947 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:40:21.947 19:01:08 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:21.947 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:21.947 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:21.947 INFO: Log level set to 20 00:40:21.947 INFO: Requests: 00:40:21.947 { 00:40:21.947 "jsonrpc": "2.0", 00:40:21.947 "method": "nvmf_set_config", 00:40:21.947 "id": 1, 00:40:21.947 "params": { 00:40:21.947 "admin_cmd_passthru": { 00:40:21.947 "identify_ctrlr": true 00:40:21.947 } 00:40:21.947 } 00:40:21.947 } 00:40:21.947 00:40:22.205 INFO: response: 00:40:22.205 { 00:40:22.205 "jsonrpc": "2.0", 00:40:22.205 "id": 1, 00:40:22.205 "result": true 00:40:22.205 } 00:40:22.205 00:40:22.205 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.205 19:01:08 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:22.205 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.205 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.205 INFO: Setting log level to 20 00:40:22.205 INFO: Setting log level to 20 00:40:22.205 INFO: Log level set to 20 00:40:22.205 INFO: Log level set to 20 00:40:22.205 
INFO: Requests: 00:40:22.205 { 00:40:22.205 "jsonrpc": "2.0", 00:40:22.205 "method": "framework_start_init", 00:40:22.205 "id": 1 00:40:22.205 } 00:40:22.205 00:40:22.205 INFO: Requests: 00:40:22.205 { 00:40:22.205 "jsonrpc": "2.0", 00:40:22.205 "method": "framework_start_init", 00:40:22.205 "id": 1 00:40:22.205 } 00:40:22.205 00:40:22.205 [2024-11-17 19:01:08.612129] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:22.205 INFO: response: 00:40:22.205 { 00:40:22.205 "jsonrpc": "2.0", 00:40:22.205 "id": 1, 00:40:22.205 "result": true 00:40:22.205 } 00:40:22.205 00:40:22.205 INFO: response: 00:40:22.205 { 00:40:22.205 "jsonrpc": "2.0", 00:40:22.205 "id": 1, 00:40:22.205 "result": true 00:40:22.205 } 00:40:22.205 00:40:22.205 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.205 19:01:08 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:22.205 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.205 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.205 INFO: Setting log level to 40 00:40:22.205 INFO: Setting log level to 40 00:40:22.205 INFO: Setting log level to 40 00:40:22.205 [2024-11-17 19:01:08.622076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:22.205 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.205 19:01:08 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:22.205 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:22.205 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:22.205 19:01:08 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:22.205 19:01:08 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.205 19:01:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:25.492 Nvme0n1 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:25.492 [2024-11-17 19:01:11.524491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.492 19:01:11 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:25.492 [ 00:40:25.492 { 00:40:25.492 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:25.492 "subtype": "Discovery", 00:40:25.492 "listen_addresses": [], 00:40:25.492 "allow_any_host": true, 00:40:25.492 "hosts": [] 00:40:25.492 }, 00:40:25.492 { 00:40:25.492 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:25.492 "subtype": "NVMe", 00:40:25.492 "listen_addresses": [ 00:40:25.492 { 00:40:25.492 "trtype": "TCP", 00:40:25.492 "adrfam": "IPv4", 00:40:25.492 "traddr": "10.0.0.2", 00:40:25.492 "trsvcid": "4420" 00:40:25.492 } 00:40:25.492 ], 00:40:25.492 "allow_any_host": true, 00:40:25.492 "hosts": [], 00:40:25.492 "serial_number": "SPDK00000000000001", 00:40:25.492 "model_number": "SPDK bdev Controller", 00:40:25.492 "max_namespaces": 1, 00:40:25.492 "min_cntlid": 1, 00:40:25.492 "max_cntlid": 65519, 00:40:25.492 "namespaces": [ 00:40:25.492 { 00:40:25.492 "nsid": 1, 00:40:25.492 "bdev_name": "Nvme0n1", 00:40:25.492 "name": "Nvme0n1", 00:40:25.492 "nguid": "24C9DB72F46A4832B8E786C975FA2070", 00:40:25.492 "uuid": "24c9db72-f46a-4832-b8e7-86c975fa2070" 00:40:25.492 } 00:40:25.492 ] 00:40:25.492 } 00:40:25.492 ] 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:25.492 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:25.492 19:01:11 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:25.492 19:01:11 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:25.492 19:01:11 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:25.492 19:01:11 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:25.492 19:01:11 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:25.492 19:01:11 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:25.492 19:01:11 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:25.492 rmmod nvme_tcp 00:40:25.492 rmmod nvme_fabrics 00:40:25.492 rmmod nvme_keyring 00:40:25.492 19:01:11 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:25.492 19:01:11 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:25.492 19:01:11 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:25.492 19:01:11 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 951272 ']' 00:40:25.492 19:01:11 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 951272 00:40:25.493 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 951272 ']' 00:40:25.493 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 951272 00:40:25.493 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:40:25.493 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:25.493 19:01:11 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 951272 00:40:25.493 19:01:12 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:25.493 19:01:12 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:25.493 19:01:12 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 951272' 00:40:25.493 killing process with pid 951272 00:40:25.493 19:01:12 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 951272 00:40:25.493 19:01:12 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 951272 00:40:27.395 19:01:13 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:27.395 19:01:13 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:27.395 19:01:13 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:27.395 19:01:13 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:27.395 19:01:13 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:40:27.395 19:01:13 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:40:27.395 19:01:13 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:40:27.395 19:01:13 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:27.395 19:01:13 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:27.395 19:01:13 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:27.395 19:01:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:27.395 19:01:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.305 19:01:15 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:29.305 00:40:29.305 real 0m18.257s 00:40:29.305 user 0m27.217s 00:40:29.305 sys 0m2.446s 00:40:29.305 19:01:15 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:29.305 19:01:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:29.305 ************************************ 00:40:29.305 END TEST nvmf_identify_passthru 00:40:29.305 ************************************ 00:40:29.305 19:01:15 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:29.305 19:01:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:29.305 19:01:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:29.305 19:01:15 -- common/autotest_common.sh@10 -- # set +x 00:40:29.305 ************************************ 00:40:29.305 START TEST nvmf_dif 00:40:29.305 ************************************ 00:40:29.306 19:01:15 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:29.306 * Looking for test storage... 
00:40:29.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:29.306 19:01:15 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:29.306 19:01:15 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:40:29.306 19:01:15 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:29.306 19:01:15 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:29.306 19:01:15 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:29.306 19:01:15 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:29.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.306 --rc genhtml_branch_coverage=1 00:40:29.306 --rc genhtml_function_coverage=1 00:40:29.306 --rc genhtml_legend=1 00:40:29.306 --rc geninfo_all_blocks=1 00:40:29.306 --rc geninfo_unexecuted_blocks=1 00:40:29.306 00:40:29.306 ' 00:40:29.306 19:01:15 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:29.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.306 --rc genhtml_branch_coverage=1 00:40:29.306 --rc genhtml_function_coverage=1 00:40:29.306 --rc genhtml_legend=1 00:40:29.306 --rc geninfo_all_blocks=1 00:40:29.306 --rc geninfo_unexecuted_blocks=1 00:40:29.306 00:40:29.306 ' 00:40:29.306 19:01:15 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:40:29.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.306 --rc genhtml_branch_coverage=1 00:40:29.306 --rc genhtml_function_coverage=1 00:40:29.306 --rc genhtml_legend=1 00:40:29.306 --rc geninfo_all_blocks=1 00:40:29.306 --rc geninfo_unexecuted_blocks=1 00:40:29.306 00:40:29.306 ' 00:40:29.306 19:01:15 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:29.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.306 --rc genhtml_branch_coverage=1 00:40:29.306 --rc genhtml_function_coverage=1 00:40:29.306 --rc genhtml_legend=1 00:40:29.306 --rc geninfo_all_blocks=1 00:40:29.306 --rc geninfo_unexecuted_blocks=1 00:40:29.306 00:40:29.306 ' 00:40:29.306 19:01:15 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:29.306 19:01:15 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:29.306 19:01:15 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:29.306 19:01:15 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.306 19:01:15 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.306 19:01:15 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.306 19:01:15 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:29.306 19:01:15 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:29.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:29.306 19:01:15 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:29.306 19:01:15 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:40:29.306 19:01:15 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:29.306 19:01:15 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:29.306 19:01:15 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:29.306 19:01:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:29.306 19:01:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:29.306 19:01:15 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:40:29.306 19:01:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:31.990 19:01:17 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:31.990 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:31.990 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:31.990 19:01:17 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:31.990 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:31.990 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:40:31.990 19:01:17 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:31.991 
19:01:17 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:31.991 19:01:17 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:31.991 19:01:18 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:31.991 19:01:18 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:31.991 19:01:18 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:31.991 19:01:18 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:31.991 19:01:18 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:31.991 19:01:18 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:31.991 19:01:18 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:31.991 19:01:18 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:31.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:31.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:40:31.991 00:40:31.991 --- 10.0.0.2 ping statistics --- 00:40:31.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:31.991 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:40:31.991 19:01:18 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:31.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:31.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:40:31.991 00:40:31.991 --- 10.0.0.1 ping statistics --- 00:40:31.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:31.991 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:40:31.991 19:01:18 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:31.991 19:01:18 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:40:31.991 19:01:18 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:40:31.991 19:01:18 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:32.930 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:40:32.930 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:32.930 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:32.930 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:32.930 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:32.930 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:32.930 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:32.930 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:32.930 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:32.930 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:32.930 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:32.930 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:32.930 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:40:32.930 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:32.930 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:32.930 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:32.930 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:32.930 19:01:19 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:32.930 19:01:19 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:32.930 19:01:19 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:32.930 19:01:19 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:32.930 19:01:19 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:32.930 19:01:19 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:32.930 19:01:19 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:32.930 19:01:19 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:32.930 19:01:19 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:32.930 19:01:19 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:32.930 19:01:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:32.930 19:01:19 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=954496 00:40:32.930 19:01:19 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:32.930 19:01:19 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 954496 00:40:32.930 19:01:19 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 954496 ']' 00:40:32.930 19:01:19 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:32.930 19:01:19 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:32.930 19:01:19 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:32.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:32.930 19:01:19 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:32.930 19:01:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:33.188 [2024-11-17 19:01:19.516373] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:40:33.188 [2024-11-17 19:01:19.516449] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:33.188 [2024-11-17 19:01:19.588086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:33.188 [2024-11-17 19:01:19.632459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:33.188 [2024-11-17 19:01:19.632517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:33.188 [2024-11-17 19:01:19.632546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:33.188 [2024-11-17 19:01:19.632557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:33.188 [2024-11-17 19:01:19.632567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:33.188 [2024-11-17 19:01:19.633154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:33.188 19:01:19 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:33.188 19:01:19 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:40:33.188 19:01:19 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:33.188 19:01:19 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:33.188 19:01:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:33.447 19:01:19 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:33.447 19:01:19 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:33.447 19:01:19 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:33.447 19:01:19 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.447 19:01:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:33.447 [2024-11-17 19:01:19.771299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:33.447 19:01:19 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.447 19:01:19 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:33.447 19:01:19 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:33.447 19:01:19 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:33.447 19:01:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:33.447 ************************************ 00:40:33.447 START TEST fio_dif_1_default 00:40:33.447 ************************************ 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:33.447 bdev_null0 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:33.447 [2024-11-17 19:01:19.827560] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:33.447 19:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:33.447 { 00:40:33.447 "params": { 00:40:33.447 "name": "Nvme$subsystem", 00:40:33.447 "trtype": "$TEST_TRANSPORT", 00:40:33.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:33.447 "adrfam": "ipv4", 00:40:33.447 "trsvcid": "$NVMF_PORT", 00:40:33.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:33.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:33.447 "hdgst": ${hdgst:-false}, 00:40:33.447 "ddgst": ${ddgst:-false} 00:40:33.447 }, 00:40:33.447 "method": "bdev_nvme_attach_controller" 00:40:33.447 } 00:40:33.447 EOF 00:40:33.447 )") 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:33.448 19:01:19 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:33.448 "params": { 00:40:33.448 "name": "Nvme0", 00:40:33.448 "trtype": "tcp", 00:40:33.448 "traddr": "10.0.0.2", 00:40:33.448 "adrfam": "ipv4", 00:40:33.448 "trsvcid": "4420", 00:40:33.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:33.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:33.448 "hdgst": false, 00:40:33.448 "ddgst": false 00:40:33.448 }, 00:40:33.448 "method": "bdev_nvme_attach_controller" 00:40:33.448 }' 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:33.448 19:01:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:33.706 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:33.706 fio-3.35 
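The `gen_nvmf_target_json` output printed above is one JSON-RPC fragment per subsystem, targeting `bdev_nvme_attach_controller`. A minimal Python sketch of that fragment (field values mirror the Nvme0 config in the log; the function name is an illustrative stand-in for the shell heredoc):

```python
import json

# Sketch of the per-subsystem config fragment that gen_nvmf_target_json
# expands for the fio bdev plugin. Values default to those shown in the
# trace (tcp transport, 10.0.0.2:4420, digests off).
def nvmf_target_json(subsystem: int, trtype="tcp", traddr="10.0.0.2",
                     trsvcid="4420", hdgst=False, ddgst=False) -> str:
    return json.dumps({
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": trtype,
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": hdgst,
            "ddgst": ddgst,
        },
        "method": "bdev_nvme_attach_controller",
    })

cfg = json.loads(nvmf_target_json(0))
print(cfg["params"]["subnqn"])
```

The fragment is handed to fio via /dev/fd/62 as `--spdk_json_conf`, which is why no on-disk config file appears in the trace.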
00:40:33.706 Starting 1 thread 00:40:45.924 00:40:45.924 filename0: (groupid=0, jobs=1): err= 0: pid=954722: Sun Nov 17 19:01:30 2024 00:40:45.924 read: IOPS=211, BW=845KiB/s (866kB/s)(8480KiB/10032msec) 00:40:45.924 slat (nsec): min=4081, max=69507, avg=9052.48, stdev=2556.18 00:40:45.924 clat (usec): min=534, max=46923, avg=18900.24, stdev=20251.42 00:40:45.924 lat (usec): min=542, max=46952, avg=18909.29, stdev=20251.35 00:40:45.924 clat percentiles (usec): 00:40:45.924 | 1.00th=[ 562], 5.00th=[ 578], 10.00th=[ 594], 20.00th=[ 619], 00:40:45.924 | 30.00th=[ 627], 40.00th=[ 635], 50.00th=[ 652], 60.00th=[41157], 00:40:45.924 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:45.924 | 99.00th=[41681], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:40:45.924 | 99.99th=[46924] 00:40:45.924 bw ( KiB/s): min= 768, max= 1024, per=100.00%, avg=846.40, stdev=70.78, samples=20 00:40:45.924 iops : min= 192, max= 256, avg=211.60, stdev=17.69, samples=20 00:40:45.924 lat (usec) : 750=55.09% 00:40:45.924 lat (msec) : 50=44.91% 00:40:45.924 cpu : usr=90.42%, sys=9.30%, ctx=14, majf=0, minf=250 00:40:45.924 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:45.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:45.924 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:45.924 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:45.924 00:40:45.924 Run status group 0 (all jobs): 00:40:45.924 READ: bw=845KiB/s (866kB/s), 845KiB/s-845KiB/s (866kB/s-866kB/s), io=8480KiB (8684kB), run=10032-10032msec 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:40:45.924 
19:01:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.924 00:40:45.924 real 0m11.232s 00:40:45.924 user 0m10.331s 00:40:45.924 sys 0m1.249s 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:45.924 ************************************ 00:40:45.924 END TEST fio_dif_1_default 00:40:45.924 ************************************ 00:40:45.924 19:01:31 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:45.924 19:01:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:45.924 19:01:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:45.924 19:01:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:45.924 ************************************ 00:40:45.924 START TEST fio_dif_1_multi_subsystems 00:40:45.924 ************************************ 00:40:45.924 19:01:31 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:45.924 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:45.925 bdev_null0 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.925 19:01:31 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:45.925 [2024-11-17 19:01:31.106152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:45.925 bdev_null1 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:45.925 { 00:40:45.925 "params": { 00:40:45.925 "name": "Nvme$subsystem", 00:40:45.925 "trtype": "$TEST_TRANSPORT", 00:40:45.925 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:40:45.925 "adrfam": "ipv4", 00:40:45.925 "trsvcid": "$NVMF_PORT", 00:40:45.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:45.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:45.925 "hdgst": ${hdgst:-false}, 00:40:45.925 "ddgst": ${ddgst:-false} 00:40:45.925 }, 00:40:45.925 "method": "bdev_nvme_attach_controller" 00:40:45.925 } 00:40:45.925 EOF 00:40:45.925 )") 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:45.925 
19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:45.925 { 00:40:45.925 "params": { 00:40:45.925 "name": "Nvme$subsystem", 00:40:45.925 "trtype": "$TEST_TRANSPORT", 00:40:45.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:45.925 "adrfam": "ipv4", 00:40:45.925 "trsvcid": "$NVMF_PORT", 00:40:45.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:45.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:45.925 "hdgst": ${hdgst:-false}, 00:40:45.925 "ddgst": ${ddgst:-false} 00:40:45.925 }, 00:40:45.925 "method": "bdev_nvme_attach_controller" 00:40:45.925 } 00:40:45.925 EOF 00:40:45.925 )") 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
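In the multi-subsystem case the loop appends one fragment per subsystem to the `config` array, and `IFS=,` makes the expansion join them with commas before `jq` formats the result. A Python sketch of that join step, under the simplifying assumption that wrapping the joined string in brackets yields the equivalent list (fragment bodies are trimmed placeholders, not the full configs from the log):

```python
import json

# Sketch of how nvmf/common.sh assembles the multi-subsystem config:
# each iteration of the subsystem loop appends one JSON object string
# to an array, and IFS=, joins them with commas when the array is
# expanded. Bracketing the joined string gives a parseable JSON list.
def join_config(fragments):
    joined = ",".join(fragments)        # what the IFS=, expansion does
    return json.loads("[" + joined + "]")

config = [
    '{"params": {"name": "Nvme0"}, "method": "bdev_nvme_attach_controller"}',
    '{"params": {"name": "Nvme1"}, "method": "bdev_nvme_attach_controller"}',
]
subsystems = join_config(config)
print(len(subsystems))
```

This matches the printed output that follows: two attach-controller objects, Nvme0 and Nvme1, separated by a comma.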
00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:45.925 "params": { 00:40:45.925 "name": "Nvme0", 00:40:45.925 "trtype": "tcp", 00:40:45.925 "traddr": "10.0.0.2", 00:40:45.925 "adrfam": "ipv4", 00:40:45.925 "trsvcid": "4420", 00:40:45.925 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:45.925 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:45.925 "hdgst": false, 00:40:45.925 "ddgst": false 00:40:45.925 }, 00:40:45.925 "method": "bdev_nvme_attach_controller" 00:40:45.925 },{ 00:40:45.925 "params": { 00:40:45.925 "name": "Nvme1", 00:40:45.925 "trtype": "tcp", 00:40:45.925 "traddr": "10.0.0.2", 00:40:45.925 "adrfam": "ipv4", 00:40:45.925 "trsvcid": "4420", 00:40:45.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:45.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:45.925 "hdgst": false, 00:40:45.925 "ddgst": false 00:40:45.925 }, 00:40:45.925 "method": "bdev_nvme_attach_controller" 00:40:45.925 }' 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:45.925 19:01:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:45.926 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:45.926 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:45.926 fio-3.35 00:40:45.926 Starting 2 threads 00:40:55.910 00:40:55.910 filename0: (groupid=0, jobs=1): err= 0: pid=956121: Sun Nov 17 19:01:42 2024 00:40:55.910 read: IOPS=184, BW=738KiB/s (756kB/s)(7408KiB/10032msec) 00:40:55.910 slat (nsec): min=7157, max=26031, avg=9373.09, stdev=2297.58 00:40:55.910 clat (usec): min=480, max=42484, avg=21636.72, stdev=20442.13 00:40:55.910 lat (usec): min=488, max=42496, avg=21646.09, stdev=20442.04 00:40:55.910 clat percentiles (usec): 00:40:55.910 | 1.00th=[ 515], 5.00th=[ 562], 10.00th=[ 578], 20.00th=[ 603], 00:40:55.910 | 30.00th=[ 611], 40.00th=[ 627], 50.00th=[41157], 60.00th=[41157], 00:40:55.910 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:40:55.910 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:40:55.910 | 99.99th=[42730] 00:40:55.910 bw ( KiB/s): min= 576, max= 768, per=65.45%, avg=739.20, stdev=52.84, samples=20 00:40:55.910 iops : min= 144, max= 192, avg=184.80, stdev=13.21, samples=20 00:40:55.910 lat (usec) : 500=0.38%, 750=47.52%, 1000=0.49% 00:40:55.910 lat (msec) : 10=0.22%, 50=51.40% 00:40:55.910 cpu : usr=94.42%, sys=5.07%, ctx=49, majf=0, minf=153 00:40:55.910 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:55.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:40:55.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:55.910 issued rwts: total=1852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:55.910 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:55.910 filename1: (groupid=0, jobs=1): err= 0: pid=956122: Sun Nov 17 19:01:42 2024 00:40:55.910 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10009msec) 00:40:55.910 slat (nsec): min=6177, max=26780, avg=9532.10, stdev=2519.97 00:40:55.910 clat (usec): min=590, max=44236, avg=40820.37, stdev=2584.67 00:40:55.910 lat (usec): min=599, max=44251, avg=40829.91, stdev=2584.67 00:40:55.910 clat percentiles (usec): 00:40:55.910 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:55.910 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:55.910 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:55.910 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:40:55.910 | 99.99th=[44303] 00:40:55.910 bw ( KiB/s): min= 384, max= 416, per=34.54%, avg=390.40, stdev=13.13, samples=20 00:40:55.910 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:40:55.910 lat (usec) : 750=0.41% 00:40:55.910 lat (msec) : 50=99.59% 00:40:55.910 cpu : usr=95.11%, sys=4.60%, ctx=12, majf=0, minf=126 00:40:55.910 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:55.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:55.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:55.910 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:55.910 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:55.910 00:40:55.910 Run status group 0 (all jobs): 00:40:55.910 READ: bw=1129KiB/s (1156kB/s), 392KiB/s-738KiB/s (401kB/s-756kB/s), io=11.1MiB (11.6MB), run=10009-10032msec 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- 
# destroy_subsystems 0 1 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:40:55.910 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:55.911 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.911 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:55.911 19:01:42 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.911 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:55.911 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.911 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:55.911 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.911 00:40:55.911 real 0m11.238s 00:40:55.911 user 0m20.246s 00:40:55.911 sys 0m1.246s 00:40:55.911 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:55.911 19:01:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:55.911 ************************************ 00:40:55.911 END TEST fio_dif_1_multi_subsystems 00:40:55.911 ************************************ 00:40:55.911 19:01:42 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:40:55.911 19:01:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:55.911 19:01:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:55.911 19:01:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:55.911 ************************************ 00:40:55.911 START TEST fio_dif_rand_params 00:40:55.911 ************************************ 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:40:55.911 19:01:42 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:55.911 bdev_null0 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:55.911 [2024-11-17 19:01:42.391046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:55.911 { 00:40:55.911 "params": { 00:40:55.911 "name": "Nvme$subsystem", 00:40:55.911 "trtype": "$TEST_TRANSPORT", 00:40:55.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:55.911 "adrfam": "ipv4", 00:40:55.911 "trsvcid": "$NVMF_PORT", 00:40:55.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:55.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:55.911 "hdgst": ${hdgst:-false}, 00:40:55.911 "ddgst": ${ddgst:-false} 00:40:55.911 }, 
00:40:55.911 "method": "bdev_nvme_attach_controller" 00:40:55.911 } 00:40:55.911 EOF 00:40:55.911 )") 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 
00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:55.911 "params": { 00:40:55.911 "name": "Nvme0", 00:40:55.911 "trtype": "tcp", 00:40:55.911 "traddr": "10.0.0.2", 00:40:55.911 "adrfam": "ipv4", 00:40:55.911 "trsvcid": "4420", 00:40:55.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:55.911 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:55.911 "hdgst": false, 00:40:55.911 "ddgst": false 00:40:55.911 }, 00:40:55.911 "method": "bdev_nvme_attach_controller" 00:40:55.911 }' 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:55.911 19:01:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:56.171 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:56.171 ... 00:40:56.171 fio-3.35 00:40:56.171 Starting 3 threads 00:41:02.739 00:41:02.739 filename0: (groupid=0, jobs=1): err= 0: pid=957517: Sun Nov 17 19:01:48 2024 00:41:02.739 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(147MiB/5044msec) 00:41:02.739 slat (nsec): min=7144, max=63680, avg=18124.59, stdev=4567.92 00:41:02.739 clat (usec): min=4992, max=54232, avg=12819.90, stdev=5556.88 00:41:02.739 lat (usec): min=5013, max=54252, avg=12838.02, stdev=5556.84 00:41:02.739 clat percentiles (usec): 00:41:02.739 | 1.00th=[ 8356], 5.00th=[10028], 10.00th=[10421], 20.00th=[10945], 00:41:02.739 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:41:02.739 | 70.00th=[12780], 80.00th=[13173], 90.00th=[14091], 95.00th=[15008], 00:41:02.739 | 99.00th=[51119], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264], 00:41:02.739 | 99.99th=[54264] 00:41:02.739 bw ( KiB/s): min=22784, max=32256, per=34.38%, avg=30028.80, stdev=3076.86, samples=10 00:41:02.739 iops : min= 178, max= 252, avg=234.60, stdev=24.04, samples=10 00:41:02.739 lat (msec) : 10=4.43%, 20=93.62%, 50=0.68%, 100=1.28% 00:41:02.739 cpu : usr=94.59%, sys=4.92%, ctx=11, majf=0, minf=80 00:41:02.739 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.739 issued rwts: total=1175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.739 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:02.739 filename0: (groupid=0, jobs=1): err= 0: pid=957518: Sun Nov 17 19:01:48 2024 00:41:02.739 read: IOPS=224, BW=28.1MiB/s (29.4MB/s)(142MiB/5045msec) 00:41:02.739 slat (nsec): min=4351, max=49983, avg=17878.27, 
stdev=4797.66 00:41:02.739 clat (usec): min=4660, max=55171, avg=13296.68, stdev=4994.41 00:41:02.739 lat (usec): min=4674, max=55195, avg=13314.56, stdev=4994.21 00:41:02.739 clat percentiles (usec): 00:41:02.739 | 1.00th=[ 5080], 5.00th=[ 9241], 10.00th=[10683], 20.00th=[11338], 00:41:02.739 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12780], 60.00th=[13173], 00:41:02.739 | 70.00th=[13698], 80.00th=[14484], 90.00th=[15401], 95.00th=[15926], 00:41:02.739 | 99.00th=[47449], 99.50th=[52691], 99.90th=[55313], 99.95th=[55313], 00:41:02.739 | 99.99th=[55313] 00:41:02.740 bw ( KiB/s): min=20736, max=30720, per=33.15%, avg=28953.60, stdev=2947.28, samples=10 00:41:02.740 iops : min= 162, max= 240, avg=226.20, stdev=23.03, samples=10 00:41:02.740 lat (msec) : 10=5.65%, 20=92.85%, 50=0.62%, 100=0.88% 00:41:02.740 cpu : usr=93.28%, sys=5.27%, ctx=184, majf=0, minf=92 00:41:02.740 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.740 issued rwts: total=1133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.740 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:02.740 filename0: (groupid=0, jobs=1): err= 0: pid=957519: Sun Nov 17 19:01:48 2024 00:41:02.740 read: IOPS=224, BW=28.1MiB/s (29.5MB/s)(142MiB/5045msec) 00:41:02.740 slat (usec): min=4, max=104, avg=17.54, stdev= 6.39 00:41:02.740 clat (usec): min=5149, max=53138, avg=13285.65, stdev=4144.42 00:41:02.740 lat (usec): min=5157, max=53164, avg=13303.19, stdev=4144.50 00:41:02.740 clat percentiles (usec): 00:41:02.740 | 1.00th=[ 5669], 5.00th=[ 8848], 10.00th=[10421], 20.00th=[11469], 00:41:02.740 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12911], 60.00th=[13566], 00:41:02.740 | 70.00th=[14222], 80.00th=[15008], 90.00th=[15795], 95.00th=[16319], 00:41:02.740 | 99.00th=[18220], 99.50th=[46924], 99.90th=[53216], 
99.95th=[53216], 00:41:02.740 | 99.99th=[53216] 00:41:02.740 bw ( KiB/s): min=27648, max=31744, per=33.16%, avg=28959.20, stdev=1182.49, samples=10 00:41:02.740 iops : min= 216, max= 248, avg=226.20, stdev= 9.26, samples=10 00:41:02.740 lat (msec) : 10=8.29%, 20=90.74%, 50=0.62%, 100=0.35% 00:41:02.740 cpu : usr=87.21%, sys=7.77%, ctx=309, majf=0, minf=158 00:41:02.740 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:02.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:02.740 issued rwts: total=1134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:02.740 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:02.740 00:41:02.740 Run status group 0 (all jobs): 00:41:02.740 READ: bw=85.3MiB/s (89.4MB/s), 28.1MiB/s-29.1MiB/s (29.4MB/s-30.5MB/s), io=430MiB (451MB), run=5044-5045msec 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.740 bdev_null0 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.740 
19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.740 [2024-11-17 19:01:48.568765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.740 bdev_null1 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.740 
19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:41:02.740 bdev_null2 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.740 19:01:48 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:02.740 { 00:41:02.740 "params": { 00:41:02.740 "name": "Nvme$subsystem", 00:41:02.740 "trtype": "$TEST_TRANSPORT", 00:41:02.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:02.740 "adrfam": "ipv4", 00:41:02.740 "trsvcid": "$NVMF_PORT", 00:41:02.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:02.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:02.740 "hdgst": ${hdgst:-false}, 00:41:02.740 "ddgst": ${ddgst:-false} 00:41:02.740 }, 00:41:02.740 "method": "bdev_nvme_attach_controller" 00:41:02.740 } 00:41:02.740 EOF 00:41:02.740 )") 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:02.740 { 00:41:02.740 "params": { 00:41:02.740 "name": "Nvme$subsystem", 00:41:02.740 "trtype": "$TEST_TRANSPORT", 00:41:02.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:02.740 "adrfam": "ipv4", 00:41:02.740 "trsvcid": "$NVMF_PORT", 00:41:02.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:02.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:02.740 "hdgst": ${hdgst:-false}, 00:41:02.740 "ddgst": ${ddgst:-false} 00:41:02.740 }, 00:41:02.740 "method": "bdev_nvme_attach_controller" 00:41:02.740 } 00:41:02.740 EOF 00:41:02.740 )") 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:02.740 { 00:41:02.740 "params": { 00:41:02.740 "name": "Nvme$subsystem", 00:41:02.740 "trtype": "$TEST_TRANSPORT", 00:41:02.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:02.740 "adrfam": "ipv4", 00:41:02.740 "trsvcid": "$NVMF_PORT", 00:41:02.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:02.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:02.740 "hdgst": ${hdgst:-false}, 00:41:02.740 "ddgst": ${ddgst:-false} 00:41:02.740 }, 00:41:02.740 "method": "bdev_nvme_attach_controller" 00:41:02.740 } 00:41:02.740 EOF 00:41:02.740 )") 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:02.740 19:01:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:02.740 "params": { 00:41:02.740 "name": "Nvme0", 00:41:02.740 "trtype": "tcp", 00:41:02.740 "traddr": "10.0.0.2", 00:41:02.740 "adrfam": "ipv4", 00:41:02.740 "trsvcid": "4420", 00:41:02.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:02.740 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:02.740 "hdgst": false, 00:41:02.740 "ddgst": false 00:41:02.740 }, 00:41:02.740 "method": "bdev_nvme_attach_controller" 00:41:02.740 },{ 00:41:02.740 "params": { 00:41:02.740 "name": "Nvme1", 00:41:02.740 "trtype": "tcp", 00:41:02.740 "traddr": "10.0.0.2", 00:41:02.740 "adrfam": "ipv4", 00:41:02.740 "trsvcid": "4420", 00:41:02.740 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:02.740 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:02.740 "hdgst": false, 00:41:02.740 "ddgst": false 00:41:02.740 }, 00:41:02.740 "method": "bdev_nvme_attach_controller" 00:41:02.740 },{ 00:41:02.740 "params": { 00:41:02.740 "name": "Nvme2", 00:41:02.740 "trtype": "tcp", 00:41:02.740 "traddr": "10.0.0.2", 00:41:02.740 "adrfam": "ipv4", 00:41:02.740 "trsvcid": "4420", 00:41:02.740 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:02.740 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:02.741 "hdgst": false, 00:41:02.741 "ddgst": false 00:41:02.741 }, 00:41:02.741 "method": "bdev_nvme_attach_controller" 00:41:02.741 }' 00:41:02.741 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:02.741 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:02.741 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:02.741 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:02.741 19:01:48 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:02.741 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:02.741 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:02.741 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:02.741 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:02.741 19:01:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:02.741 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:02.741 ... 00:41:02.741 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:02.741 ... 00:41:02.741 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:02.741 ... 
00:41:02.741 fio-3.35 00:41:02.741 Starting 24 threads 00:41:14.967 00:41:14.967 filename0: (groupid=0, jobs=1): err= 0: pid=958372: Sun Nov 17 19:01:59 2024 00:41:14.967 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10005msec) 00:41:14.967 slat (nsec): min=8062, max=88744, avg=34255.46, stdev=13737.15 00:41:14.967 clat (usec): min=26266, max=43982, avg=33641.03, stdev=2050.27 00:41:14.967 lat (usec): min=26277, max=43999, avg=33675.29, stdev=2049.72 00:41:14.967 clat percentiles (usec): 00:41:14.967 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:41:14.967 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:41:14.967 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:41:14.967 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:41:14.967 | 99.99th=[43779] 00:41:14.967 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1899.79, stdev=47.95, samples=19 00:41:14.967 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:41:14.967 lat (msec) : 50=100.00% 00:41:14.967 cpu : usr=98.19%, sys=1.37%, ctx=16, majf=0, minf=9 00:41:14.967 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:14.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.967 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.967 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.967 filename0: (groupid=0, jobs=1): err= 0: pid=958373: Sun Nov 17 19:01:59 2024 00:41:14.967 read: IOPS=471, BW=1884KiB/s (1929kB/s)(18.4MiB/10021msec) 00:41:14.967 slat (usec): min=9, max=115, avg=43.69, stdev=17.84 00:41:14.967 clat (usec): min=15660, max=50073, avg=33566.72, stdev=2410.73 00:41:14.967 lat (usec): min=15713, max=50092, avg=33610.41, stdev=2408.74 00:41:14.967 clat percentiles (usec): 00:41:14.967 | 1.00th=[31851], 5.00th=[32375], 
10.00th=[32637], 20.00th=[32637], 00:41:14.967 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.967 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[38536], 00:41:14.967 | 99.00th=[43254], 99.50th=[43779], 99.90th=[49546], 99.95th=[49546], 00:41:14.967 | 99.99th=[50070] 00:41:14.967 bw ( KiB/s): min= 1631, max= 1920, per=4.14%, avg=1879.95, stdev=78.46, samples=20 00:41:14.967 iops : min= 407, max= 480, avg=469.95, stdev=19.74, samples=20 00:41:14.967 lat (msec) : 20=0.13%, 50=99.83%, 100=0.04% 00:41:14.967 cpu : usr=98.37%, sys=1.16%, ctx=201, majf=0, minf=9 00:41:14.967 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:41:14.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.967 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.967 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.967 filename0: (groupid=0, jobs=1): err= 0: pid=958374: Sun Nov 17 19:01:59 2024 00:41:14.967 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10018msec) 00:41:14.967 slat (nsec): min=7883, max=83656, avg=16722.89, stdev=13207.42 00:41:14.967 clat (usec): min=23328, max=44728, avg=33675.24, stdev=2195.46 00:41:14.967 lat (usec): min=23366, max=44758, avg=33691.97, stdev=2195.15 00:41:14.967 clat percentiles (usec): 00:41:14.967 | 1.00th=[26870], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:41:14.967 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:41:14.967 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:41:14.967 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:41:14.967 | 99.99th=[44827] 00:41:14.968 bw ( KiB/s): min= 1664, max= 2048, per=4.16%, avg=1888.00, stdev=81.75, samples=20 00:41:14.968 iops : min= 416, max= 512, avg=472.00, stdev=20.44, samples=20 00:41:14.968 lat (msec) : 
50=100.00% 00:41:14.968 cpu : usr=97.78%, sys=1.47%, ctx=164, majf=0, minf=9 00:41:14.968 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:14.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.968 filename0: (groupid=0, jobs=1): err= 0: pid=958375: Sun Nov 17 19:01:59 2024 00:41:14.968 read: IOPS=471, BW=1885KiB/s (1931kB/s)(18.4MiB/10014msec) 00:41:14.968 slat (usec): min=10, max=104, avg=46.23, stdev=14.47 00:41:14.968 clat (usec): min=21543, max=44003, avg=33524.65, stdev=2146.14 00:41:14.968 lat (usec): min=21582, max=44038, avg=33570.88, stdev=2145.82 00:41:14.968 clat percentiles (usec): 00:41:14.968 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:41:14.968 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.968 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[39060], 00:41:14.968 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:41:14.968 | 99.99th=[43779] 00:41:14.968 bw ( KiB/s): min= 1644, max= 2048, per=4.15%, avg=1880.60, stdev=86.89, samples=20 00:41:14.968 iops : min= 411, max= 512, avg=470.15, stdev=21.72, samples=20 00:41:14.968 lat (msec) : 50=100.00% 00:41:14.968 cpu : usr=97.57%, sys=1.62%, ctx=109, majf=0, minf=9 00:41:14.968 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:14.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.968 filename0: (groupid=0, jobs=1): err= 0: 
pid=958376: Sun Nov 17 19:01:59 2024 00:41:14.968 read: IOPS=471, BW=1886KiB/s (1932kB/s)(18.4MiB/10009msec) 00:41:14.968 slat (usec): min=4, max=106, avg=39.61, stdev=16.08 00:41:14.968 clat (usec): min=15795, max=55474, avg=33581.69, stdev=2201.28 00:41:14.968 lat (usec): min=15804, max=55502, avg=33621.30, stdev=2200.48 00:41:14.968 clat percentiles (usec): 00:41:14.968 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:41:14.968 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.968 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:41:14.968 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:41:14.968 | 99.99th=[55313] 00:41:14.968 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1893.05, stdev=68.52, samples=19 00:41:14.968 iops : min= 448, max= 512, avg=473.26, stdev=17.13, samples=19 00:41:14.968 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:41:14.968 cpu : usr=98.27%, sys=1.33%, ctx=16, majf=0, minf=9 00:41:14.968 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:14.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.968 filename0: (groupid=0, jobs=1): err= 0: pid=958377: Sun Nov 17 19:01:59 2024 00:41:14.968 read: IOPS=471, BW=1884KiB/s (1930kB/s)(18.4MiB/10007msec) 00:41:14.968 slat (usec): min=8, max=114, avg=42.84, stdev=17.97 00:41:14.968 clat (usec): min=7697, max=65824, avg=33585.54, stdev=3622.19 00:41:14.968 lat (usec): min=7728, max=65863, avg=33628.38, stdev=3622.51 00:41:14.968 clat percentiles (usec): 00:41:14.968 | 1.00th=[25035], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:41:14.968 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 
00:41:14.968 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[40109], 00:41:14.968 | 99.00th=[43254], 99.50th=[57410], 99.90th=[65799], 99.95th=[65799], 00:41:14.968 | 99.99th=[65799] 00:41:14.968 bw ( KiB/s): min= 1651, max= 1936, per=4.15%, avg=1880.15, stdev=85.90, samples=20 00:41:14.968 iops : min= 412, max= 484, avg=470.00, stdev=21.58, samples=20 00:41:14.968 lat (msec) : 10=0.30%, 20=0.17%, 50=98.81%, 100=0.72% 00:41:14.968 cpu : usr=98.45%, sys=1.13%, ctx=19, majf=0, minf=9 00:41:14.968 IO depths : 1=4.1%, 2=10.3%, 4=24.8%, 8=52.5%, 16=8.4%, 32=0.0%, >=64=0.0% 00:41:14.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 issued rwts: total=4714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.968 filename0: (groupid=0, jobs=1): err= 0: pid=958378: Sun Nov 17 19:01:59 2024 00:41:14.968 read: IOPS=471, BW=1886KiB/s (1932kB/s)(18.4MiB/10008msec) 00:41:14.968 slat (usec): min=9, max=114, avg=42.15, stdev=14.53 00:41:14.968 clat (usec): min=7598, max=62251, avg=33519.81, stdev=3065.11 00:41:14.968 lat (usec): min=7627, max=62284, avg=33561.96, stdev=3064.98 00:41:14.968 clat percentiles (usec): 00:41:14.968 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:41:14.968 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.968 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[39060], 00:41:14.968 | 99.00th=[43254], 99.50th=[43779], 99.90th=[62129], 99.95th=[62129], 00:41:14.968 | 99.99th=[62129] 00:41:14.968 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1881.60, stdev=84.09, samples=20 00:41:14.968 iops : min= 416, max= 480, avg=470.40, stdev=21.02, samples=20 00:41:14.968 lat (msec) : 10=0.34%, 50=99.32%, 100=0.34% 00:41:14.968 cpu : usr=96.40%, sys=2.21%, ctx=248, majf=0, minf=9 00:41:14.968 IO depths : 
1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:14.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.968 filename0: (groupid=0, jobs=1): err= 0: pid=958379: Sun Nov 17 19:01:59 2024 00:41:14.968 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10005msec) 00:41:14.968 slat (usec): min=8, max=133, avg=41.34, stdev=19.00 00:41:14.968 clat (usec): min=26557, max=44019, avg=33571.26, stdev=2086.21 00:41:14.968 lat (usec): min=26568, max=44040, avg=33612.60, stdev=2083.52 00:41:14.968 clat percentiles (usec): 00:41:14.968 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:41:14.968 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.968 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:41:14.968 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:41:14.968 | 99.99th=[43779] 00:41:14.968 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1899.79, stdev=47.95, samples=19 00:41:14.968 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:41:14.968 lat (msec) : 50=100.00% 00:41:14.968 cpu : usr=98.30%, sys=1.27%, ctx=16, majf=0, minf=9 00:41:14.968 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:41:14.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.968 filename1: (groupid=0, jobs=1): err= 0: pid=958380: Sun Nov 17 19:01:59 2024 00:41:14.968 read: IOPS=471, BW=1885KiB/s (1931kB/s)(18.4MiB/10014msec) 
00:41:14.968 slat (nsec): min=4171, max=99618, avg=41875.08, stdev=14323.96 00:41:14.968 clat (usec): min=21568, max=43985, avg=33554.91, stdev=2116.23 00:41:14.968 lat (usec): min=21607, max=44017, avg=33596.78, stdev=2115.77 00:41:14.968 clat percentiles (usec): 00:41:14.968 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:41:14.968 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.968 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[39060], 00:41:14.968 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:41:14.968 | 99.99th=[43779] 00:41:14.968 bw ( KiB/s): min= 1650, max= 2048, per=4.15%, avg=1880.90, stdev=86.03, samples=20 00:41:14.968 iops : min= 412, max= 512, avg=470.20, stdev=21.58, samples=20 00:41:14.968 lat (msec) : 50=100.00% 00:41:14.968 cpu : usr=98.27%, sys=1.33%, ctx=14, majf=0, minf=9 00:41:14.968 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:14.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.968 filename1: (groupid=0, jobs=1): err= 0: pid=958381: Sun Nov 17 19:01:59 2024 00:41:14.968 read: IOPS=471, BW=1884KiB/s (1929kB/s)(18.4MiB/10021msec) 00:41:14.968 slat (nsec): min=6913, max=98264, avg=37801.76, stdev=14202.30 00:41:14.968 clat (usec): min=15209, max=50533, avg=33627.97, stdev=2303.28 00:41:14.968 lat (usec): min=15218, max=50586, avg=33665.77, stdev=2302.95 00:41:14.968 clat percentiles (usec): 00:41:14.968 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:41:14.968 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.968 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[38536], 00:41:14.968 | 
99.00th=[43254], 99.50th=[43254], 99.90th=[45876], 99.95th=[46400], 00:41:14.968 | 99.99th=[50594] 00:41:14.968 bw ( KiB/s): min= 1631, max= 1920, per=4.14%, avg=1879.95, stdev=78.46, samples=20 00:41:14.968 iops : min= 407, max= 480, avg=469.95, stdev=19.74, samples=20 00:41:14.968 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:41:14.968 cpu : usr=97.10%, sys=1.82%, ctx=203, majf=0, minf=9 00:41:14.968 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:14.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.968 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.969 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.969 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.969 filename1: (groupid=0, jobs=1): err= 0: pid=958382: Sun Nov 17 19:01:59 2024 00:41:14.969 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10005msec) 00:41:14.969 slat (usec): min=11, max=137, avg=43.96, stdev=16.45 00:41:14.969 clat (usec): min=26321, max=43987, avg=33526.53, stdev=2065.10 00:41:14.969 lat (usec): min=26337, max=44019, avg=33570.48, stdev=2064.35 00:41:14.969 clat percentiles (usec): 00:41:14.969 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:41:14.969 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.969 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:41:14.969 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:41:14.969 | 99.99th=[43779] 00:41:14.969 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1899.79, stdev=47.95, samples=19 00:41:14.969 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:41:14.969 lat (msec) : 50=100.00% 00:41:14.969 cpu : usr=97.48%, sys=1.59%, ctx=203, majf=0, minf=9 00:41:14.969 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:14.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.969 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.969 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.969 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.969 filename1: (groupid=0, jobs=1): err= 0: pid=958383: Sun Nov 17 19:01:59 2024 00:41:14.969 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10005msec) 00:41:14.969 slat (usec): min=8, max=102, avg=29.35, stdev=14.24 00:41:14.969 clat (usec): min=26592, max=43970, avg=33683.13, stdev=2050.38 00:41:14.969 lat (usec): min=26607, max=43987, avg=33712.48, stdev=2048.97 00:41:14.969 clat percentiles (usec): 00:41:14.969 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:41:14.969 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:41:14.969 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:41:14.969 | 99.00th=[43254], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:41:14.969 | 99.99th=[43779] 00:41:14.969 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1899.79, stdev=47.95, samples=19 00:41:14.969 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:41:14.969 lat (msec) : 50=100.00% 00:41:14.969 cpu : usr=97.71%, sys=1.72%, ctx=67, majf=0, minf=9 00:41:14.969 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:14.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.969 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.969 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.969 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.969 filename1: (groupid=0, jobs=1): err= 0: pid=958384: Sun Nov 17 19:01:59 2024 00:41:14.969 read: IOPS=477, BW=1908KiB/s (1954kB/s)(18.6MiB/10008msec) 00:41:14.969 slat (nsec): min=7994, max=85961, avg=31204.36, stdev=15005.15 00:41:14.969 clat (usec): min=12838, max=45390, 
avg=33292.03, stdev=2965.73 00:41:14.969 lat (usec): min=12874, max=45435, avg=33323.23, stdev=2963.98 00:41:14.969 clat percentiles (usec): 00:41:14.969 | 1.00th=[18744], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:41:14.969 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:41:14.969 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:41:14.969 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[45351], 00:41:14.969 | 99.99th=[45351] 00:41:14.969 bw ( KiB/s): min= 1536, max= 2224, per=4.20%, avg=1903.20, stdev=118.54, samples=20 00:41:14.969 iops : min= 384, max= 556, avg=475.80, stdev=29.64, samples=20 00:41:14.969 lat (msec) : 20=1.21%, 50=98.79% 00:41:14.969 cpu : usr=98.23%, sys=1.34%, ctx=23, majf=0, minf=9 00:41:14.969 IO depths : 1=6.0%, 2=12.0%, 4=24.2%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0% 00:41:14.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.969 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.969 issued rwts: total=4774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.969 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.969 filename1: (groupid=0, jobs=1): err= 0: pid=958385: Sun Nov 17 19:01:59 2024 00:41:14.969 read: IOPS=470, BW=1881KiB/s (1926kB/s)(18.4MiB/10002msec) 00:41:14.969 slat (usec): min=8, max=101, avg=41.83, stdev=16.21 00:41:14.969 clat (usec): min=20907, max=66365, avg=33640.08, stdev=2931.19 00:41:14.969 lat (usec): min=20922, max=66386, avg=33681.92, stdev=2929.47 00:41:14.969 clat percentiles (usec): 00:41:14.969 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:41:14.969 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.969 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[38536], 00:41:14.969 | 99.00th=[43254], 99.50th=[43779], 99.90th=[66323], 99.95th=[66323], 00:41:14.969 | 99.99th=[66323] 00:41:14.969 bw ( KiB/s): min= 
1664, max= 1920, per=4.16%, avg=1886.47, stdev=83.19, samples=19 00:41:14.969 iops : min= 416, max= 480, avg=471.58, stdev=20.91, samples=19 00:41:14.969 lat (msec) : 50=99.62%, 100=0.38% 00:41:14.969 cpu : usr=97.49%, sys=1.68%, ctx=76, majf=0, minf=9 00:41:14.969 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:14.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.969 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.969 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.969 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.969 filename1: (groupid=0, jobs=1): err= 0: pid=958386: Sun Nov 17 19:01:59 2024 00:41:14.969 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10008msec) 00:41:14.969 slat (nsec): min=7864, max=97918, avg=26032.27, stdev=18263.53 00:41:14.969 clat (usec): min=7165, max=43743, avg=33599.30, stdev=2565.61 00:41:14.969 lat (usec): min=7212, max=43760, avg=33625.33, stdev=2563.17 00:41:14.969 clat percentiles (usec): 00:41:14.969 | 1.00th=[28443], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:41:14.969 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:41:14.969 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:41:14.969 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:41:14.969 | 99.99th=[43779] 00:41:14.969 bw ( KiB/s): min= 1536, max= 1920, per=4.16%, avg=1888.00, stdev=91.69, samples=20 00:41:14.969 iops : min= 384, max= 480, avg=472.00, stdev=22.92, samples=20 00:41:14.969 lat (msec) : 10=0.27%, 20=0.06%, 50=99.66% 00:41:14.969 cpu : usr=98.15%, sys=1.44%, ctx=15, majf=0, minf=9 00:41:14.969 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:14.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.969 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:41:14.969 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.969 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.969 filename1: (groupid=0, jobs=1): err= 0: pid=958387: Sun Nov 17 19:01:59 2024 00:41:14.969 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10005msec) 00:41:14.969 slat (usec): min=11, max=105, avg=41.38, stdev=15.07 00:41:14.969 clat (usec): min=21282, max=43783, avg=33551.27, stdev=2130.93 00:41:14.969 lat (usec): min=21308, max=43814, avg=33592.65, stdev=2129.83 00:41:14.969 clat percentiles (usec): 00:41:14.969 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:41:14.969 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.969 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:41:14.969 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:41:14.969 | 99.99th=[43779] 00:41:14.969 bw ( KiB/s): min= 1664, max= 1920, per=4.18%, avg=1893.05, stdev=68.52, samples=19 00:41:14.969 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:41:14.969 lat (msec) : 50=100.00% 00:41:14.969 cpu : usr=97.92%, sys=1.28%, ctx=90, majf=0, minf=9 00:41:14.969 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:14.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.969 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.969 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.969 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.969 filename2: (groupid=0, jobs=1): err= 0: pid=958388: Sun Nov 17 19:01:59 2024 00:41:14.969 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10018msec) 00:41:14.969 slat (usec): min=8, max=123, avg=25.83, stdev=22.62 00:41:14.969 clat (usec): min=24874, max=43992, avg=33623.23, stdev=2105.46 00:41:14.969 lat (usec): min=24897, max=44012, avg=33649.06, stdev=2102.67 00:41:14.969 clat 
percentiles (usec): 00:41:14.969 | 1.00th=[27919], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:41:14.969 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:41:14.969 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:41:14.969 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:41:14.969 | 99.99th=[43779] 00:41:14.969 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1888.00, stdev=70.42, samples=20 00:41:14.969 iops : min= 416, max= 480, avg=472.00, stdev=17.60, samples=20 00:41:14.969 lat (msec) : 50=100.00% 00:41:14.969 cpu : usr=98.42%, sys=1.17%, ctx=16, majf=0, minf=9 00:41:14.969 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:14.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.969 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.969 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.969 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.969 filename2: (groupid=0, jobs=1): err= 0: pid=958389: Sun Nov 17 19:01:59 2024 00:41:14.969 read: IOPS=471, BW=1885KiB/s (1931kB/s)(18.4MiB/10008msec) 00:41:14.969 slat (usec): min=9, max=130, avg=43.01, stdev=15.45 00:41:14.969 clat (usec): min=7662, max=62393, avg=33526.52, stdev=2999.37 00:41:14.969 lat (usec): min=7684, max=62418, avg=33569.53, stdev=2999.09 00:41:14.969 clat percentiles (usec): 00:41:14.969 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:41:14.969 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.969 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[39060], 00:41:14.969 | 99.00th=[43254], 99.50th=[43779], 99.90th=[62129], 99.95th=[62129], 00:41:14.970 | 99.99th=[62653] 00:41:14.970 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1881.60, stdev=84.09, samples=20 00:41:14.970 iops : min= 416, max= 480, avg=470.40, stdev=21.02, 
samples=20 00:41:14.970 lat (msec) : 10=0.28%, 50=99.39%, 100=0.34% 00:41:14.970 cpu : usr=98.21%, sys=1.38%, ctx=13, majf=0, minf=9 00:41:14.970 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:14.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.970 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.970 issued rwts: total=4717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.970 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.970 filename2: (groupid=0, jobs=1): err= 0: pid=958390: Sun Nov 17 19:01:59 2024 00:41:14.970 read: IOPS=496, BW=1986KiB/s (2033kB/s)(19.4MiB/10011msec) 00:41:14.970 slat (nsec): min=4042, max=82489, avg=18212.17, stdev=13602.73 00:41:14.970 clat (usec): min=11364, max=62230, avg=32144.27, stdev=5905.98 00:41:14.970 lat (usec): min=11384, max=62238, avg=32162.48, stdev=5905.32 00:41:14.970 clat percentiles (usec): 00:41:14.970 | 1.00th=[14484], 5.00th=[22152], 10.00th=[23987], 20.00th=[27919], 00:41:14.970 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.970 | 70.00th=[33424], 80.00th=[33817], 90.00th=[39060], 95.00th=[42206], 00:41:14.970 | 99.00th=[49546], 99.50th=[54264], 99.90th=[58459], 99.95th=[62129], 00:41:14.970 | 99.99th=[62129] 00:41:14.970 bw ( KiB/s): min= 1715, max= 2240, per=4.38%, avg=1985.84, stdev=115.71, samples=19 00:41:14.970 iops : min= 428, max= 560, avg=496.42, stdev=29.03, samples=19 00:41:14.970 lat (msec) : 20=4.67%, 50=94.45%, 100=0.89% 00:41:14.970 cpu : usr=96.85%, sys=2.03%, ctx=195, majf=0, minf=9 00:41:14.970 IO depths : 1=0.2%, 2=0.4%, 4=3.3%, 8=80.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:41:14.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.970 complete : 0=0.0%, 4=89.1%, 8=8.7%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.970 issued rwts: total=4970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.970 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:41:14.970 filename2: (groupid=0, jobs=1): err= 0: pid=958391: Sun Nov 17 19:01:59 2024 00:41:14.970 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10015msec) 00:41:14.970 slat (usec): min=12, max=139, avg=47.69, stdev=18.92 00:41:14.970 clat (usec): min=21437, max=47101, avg=33506.88, stdev=2195.91 00:41:14.970 lat (usec): min=21478, max=47127, avg=33554.58, stdev=2195.57 00:41:14.970 clat percentiles (usec): 00:41:14.970 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:41:14.970 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:41:14.970 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[39060], 00:41:14.970 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:41:14.970 | 99.99th=[46924] 00:41:14.970 bw ( KiB/s): min= 1637, max= 2048, per=4.15%, avg=1880.25, stdev=97.21, samples=20 00:41:14.970 iops : min= 409, max= 512, avg=470.05, stdev=24.34, samples=20 00:41:14.970 lat (msec) : 50=100.00% 00:41:14.970 cpu : usr=98.34%, sys=1.24%, ctx=22, majf=0, minf=9 00:41:14.970 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:14.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.970 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.970 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.970 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.970 filename2: (groupid=0, jobs=1): err= 0: pid=958392: Sun Nov 17 19:01:59 2024 00:41:14.970 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10005msec) 00:41:14.970 slat (nsec): min=11168, max=87683, avg=39990.01, stdev=11561.08 00:41:14.970 clat (usec): min=26282, max=44023, avg=33578.71, stdev=2052.61 00:41:14.970 lat (usec): min=26293, max=44043, avg=33618.70, stdev=2051.83 00:41:14.970 clat percentiles (usec): 00:41:14.970 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 
00:41:14.970 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.970 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:41:14.970 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:41:14.970 | 99.99th=[43779] 00:41:14.970 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1899.79, stdev=47.95, samples=19 00:41:14.970 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:41:14.970 lat (msec) : 50=100.00% 00:41:14.970 cpu : usr=97.70%, sys=1.55%, ctx=108, majf=0, minf=9 00:41:14.970 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:14.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.970 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.970 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.970 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.970 filename2: (groupid=0, jobs=1): err= 0: pid=958393: Sun Nov 17 19:01:59 2024 00:41:14.970 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10005msec) 00:41:14.970 slat (usec): min=13, max=128, avg=45.29, stdev=16.93 00:41:14.970 clat (usec): min=26304, max=44008, avg=33511.83, stdev=2061.18 00:41:14.970 lat (usec): min=26319, max=44029, avg=33557.13, stdev=2060.54 00:41:14.970 clat percentiles (usec): 00:41:14.970 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:41:14.970 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.970 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:41:14.970 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:41:14.970 | 99.99th=[43779] 00:41:14.970 bw ( KiB/s): min= 1792, max= 1920, per=4.19%, avg=1899.79, stdev=47.95, samples=19 00:41:14.970 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:41:14.970 lat (msec) : 50=100.00% 00:41:14.970 cpu : usr=98.41%, sys=1.17%, ctx=18, 
majf=0, minf=9 00:41:14.970 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:14.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.970 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.970 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.970 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.970 filename2: (groupid=0, jobs=1): err= 0: pid=958394: Sun Nov 17 19:01:59 2024 00:41:14.970 read: IOPS=470, BW=1881KiB/s (1926kB/s)(18.4MiB/10002msec) 00:41:14.970 slat (usec): min=15, max=100, avg=42.98, stdev=17.45 00:41:14.970 clat (usec): min=13135, max=87314, avg=33613.18, stdev=3005.23 00:41:14.970 lat (usec): min=13172, max=87350, avg=33656.15, stdev=3003.57 00:41:14.970 clat percentiles (usec): 00:41:14.970 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:41:14.970 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.970 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[38536], 00:41:14.970 | 99.00th=[43254], 99.50th=[43254], 99.90th=[65799], 99.95th=[65799], 00:41:14.970 | 99.99th=[87557] 00:41:14.970 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1886.47, stdev=83.19, samples=19 00:41:14.970 iops : min= 416, max= 480, avg=471.58, stdev=20.91, samples=19 00:41:14.970 lat (msec) : 20=0.04%, 50=99.62%, 100=0.34% 00:41:14.970 cpu : usr=98.48%, sys=1.06%, ctx=13, majf=0, minf=9 00:41:14.970 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:14.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.970 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.970 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.970 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.970 filename2: (groupid=0, jobs=1): err= 0: pid=958395: Sun Nov 17 19:01:59 2024 00:41:14.970 
read: IOPS=471, BW=1886KiB/s (1932kB/s)(18.4MiB/10008msec) 00:41:14.970 slat (nsec): min=10588, max=95576, avg=41240.32, stdev=11551.70 00:41:14.970 clat (usec): min=21523, max=44028, avg=33558.06, stdev=2096.70 00:41:14.970 lat (usec): min=21568, max=44053, avg=33599.30, stdev=2096.69 00:41:14.970 clat percentiles (usec): 00:41:14.970 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:41:14.970 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:14.970 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:41:14.970 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:41:14.970 | 99.99th=[43779] 00:41:14.970 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1881.60, stdev=93.78, samples=20 00:41:14.970 iops : min= 416, max= 512, avg=470.40, stdev=23.45, samples=20 00:41:14.970 lat (msec) : 50=100.00% 00:41:14.970 cpu : usr=98.31%, sys=1.28%, ctx=16, majf=0, minf=9 00:41:14.970 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:14.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.970 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:14.970 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:14.970 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:14.970 00:41:14.970 Run status group 0 (all jobs): 00:41:14.970 READ: bw=44.3MiB/s (46.4MB/s), 1881KiB/s-1986KiB/s (1926kB/s-2033kB/s), io=444MiB (465MB), run=10002-10021msec 00:41:14.970 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:14.970 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:14.970 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:14.970 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:14.970 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # 
local sub_id=0 00:41:14.970 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:14.970 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.970 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.970 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.970 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:14.970 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.970 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.970 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.970 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 
-- # create_subsystem 0 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.971 bdev_null0 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.971 [2024-11-17 19:02:00.202481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.971 bdev_null1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:14.971 { 00:41:14.971 "params": { 00:41:14.971 "name": "Nvme$subsystem", 00:41:14.971 "trtype": "$TEST_TRANSPORT", 00:41:14.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:14.971 "adrfam": "ipv4", 00:41:14.971 "trsvcid": "$NVMF_PORT", 00:41:14.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:14.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:14.971 "hdgst": ${hdgst:-false}, 00:41:14.971 "ddgst": ${ddgst:-false} 00:41:14.971 }, 00:41:14.971 "method": "bdev_nvme_attach_controller" 00:41:14.971 } 00:41:14.971 EOF 00:41:14.971 )") 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:14.971 { 00:41:14.971 "params": { 00:41:14.971 "name": 
"Nvme$subsystem", 00:41:14.971 "trtype": "$TEST_TRANSPORT", 00:41:14.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:14.971 "adrfam": "ipv4", 00:41:14.971 "trsvcid": "$NVMF_PORT", 00:41:14.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:14.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:14.971 "hdgst": ${hdgst:-false}, 00:41:14.971 "ddgst": ${ddgst:-false} 00:41:14.971 }, 00:41:14.971 "method": "bdev_nvme_attach_controller" 00:41:14.971 } 00:41:14.971 EOF 00:41:14.971 )") 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:14.971 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:14.972 19:02:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:14.972 19:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:14.972 19:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:14.972 19:02:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:14.972 "params": { 00:41:14.972 "name": "Nvme0", 00:41:14.972 "trtype": "tcp", 00:41:14.972 "traddr": "10.0.0.2", 00:41:14.972 "adrfam": "ipv4", 00:41:14.972 "trsvcid": "4420", 00:41:14.972 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:14.972 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:14.972 "hdgst": false, 00:41:14.972 "ddgst": false 00:41:14.972 }, 00:41:14.972 "method": "bdev_nvme_attach_controller" 00:41:14.972 },{ 00:41:14.972 "params": { 00:41:14.972 "name": "Nvme1", 00:41:14.972 "trtype": "tcp", 00:41:14.972 "traddr": "10.0.0.2", 00:41:14.972 "adrfam": "ipv4", 00:41:14.972 "trsvcid": "4420", 00:41:14.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:14.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:14.972 "hdgst": false, 00:41:14.972 "ddgst": false 00:41:14.972 }, 00:41:14.972 "method": "bdev_nvme_attach_controller" 00:41:14.972 }' 00:41:14.972 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 
00:41:14.972 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:14.972 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:14.972 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:14.972 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:14.972 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:14.972 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:14.972 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:14.972 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:14.972 19:02:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:14.972 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:14.972 ... 00:41:14.972 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:14.972 ... 
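Before the fio results below, the harness above can be seen generating an SPDK JSON configuration on the fly (the `printf '%s\n' '{ ... }'` output) and piping it to fio's `spdk_bdev` ioengine via `/dev/fd/62`. A minimal sketch re-creating one of those attach-controller entries as a standalone file, so the shape is easier to read; the parameter values (`traddr`, `trsvcid`, `subnqn`, digest flags) are copied from the logged output, and the surrounding `subsystems`/`config` wrapper is an assumption about how such a fragment is normally embedded, not something this log shows:

```shell
# Hypothetical re-creation of the bdev_nvme_attach_controller JSON printed
# in the log above; values are taken from the logged printf output.
cat > /tmp/nvme_attach.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# sanity-check that the file is valid JSON and print the controller name
python3 -c 'import json; print(json.load(open("/tmp/nvme_attach.json"))["subsystems"][0]["config"][0]["params"]["name"])'
# prints: Nvme0
```

In the log itself this JSON never touches disk: `jq .` validates it and `printf` streams it straight into the file descriptor that fio reads as `--spdk_json_conf /dev/fd/62`.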
00:41:14.972 fio-3.35 00:41:14.972 Starting 4 threads 00:41:20.241 00:41:20.241 filename0: (groupid=0, jobs=1): err= 0: pid=959673: Sun Nov 17 19:02:06 2024 00:41:20.241 read: IOPS=1808, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5001msec) 00:41:20.241 slat (nsec): min=4458, max=67998, avg=19016.38, stdev=9597.33 00:41:20.241 clat (usec): min=778, max=9434, avg=4352.93, stdev=625.92 00:41:20.241 lat (usec): min=790, max=9454, avg=4371.95, stdev=626.02 00:41:20.241 clat percentiles (usec): 00:41:20.241 | 1.00th=[ 2933], 5.00th=[ 3654], 10.00th=[ 3884], 20.00th=[ 4047], 00:41:20.241 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:20.241 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 5276], 95.00th=[ 5407], 00:41:20.241 | 99.00th=[ 6390], 99.50th=[ 7177], 99.90th=[ 8717], 99.95th=[ 9110], 00:41:20.241 | 99.99th=[ 9372] 00:41:20.241 bw ( KiB/s): min=11824, max=15552, per=24.97%, avg=14419.56, stdev=1370.33, samples=9 00:41:20.241 iops : min= 1478, max= 1944, avg=1802.44, stdev=171.29, samples=9 00:41:20.241 lat (usec) : 1000=0.06% 00:41:20.241 lat (msec) : 2=0.34%, 4=15.17%, 10=84.43% 00:41:20.241 cpu : usr=95.86%, sys=3.68%, ctx=7, majf=0, minf=71 00:41:20.241 IO depths : 1=0.5%, 2=18.1%, 4=55.1%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.241 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.241 issued rwts: total=9045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.241 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:20.241 filename0: (groupid=0, jobs=1): err= 0: pid=959675: Sun Nov 17 19:02:06 2024 00:41:20.241 read: IOPS=1805, BW=14.1MiB/s (14.8MB/s)(70.6MiB/5001msec) 00:41:20.241 slat (nsec): min=4023, max=70022, avg=19694.48, stdev=9761.02 00:41:20.241 clat (usec): min=620, max=9596, avg=4353.43, stdev=710.89 00:41:20.241 lat (usec): min=627, max=9605, avg=4373.13, stdev=710.91 00:41:20.241 clat percentiles (usec): 
00:41:20.242 | 1.00th=[ 1860], 5.00th=[ 3654], 10.00th=[ 3884], 20.00th=[ 4047], 00:41:20.242 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:20.242 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 5276], 95.00th=[ 5407], 00:41:20.242 | 99.00th=[ 6783], 99.50th=[ 7242], 99.90th=[ 8717], 99.95th=[ 9110], 00:41:20.242 | 99.99th=[ 9634] 00:41:20.242 bw ( KiB/s): min=11904, max=15152, per=24.91%, avg=14389.33, stdev=1292.59, samples=9 00:41:20.242 iops : min= 1488, max= 1894, avg=1798.67, stdev=161.57, samples=9 00:41:20.242 lat (usec) : 750=0.02%, 1000=0.10% 00:41:20.242 lat (msec) : 2=0.95%, 4=15.66%, 10=83.27% 00:41:20.242 cpu : usr=96.08%, sys=3.46%, ctx=7, majf=0, minf=91 00:41:20.242 IO depths : 1=0.4%, 2=20.0%, 4=53.6%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.242 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.242 issued rwts: total=9031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.242 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:20.242 filename1: (groupid=0, jobs=1): err= 0: pid=959676: Sun Nov 17 19:02:06 2024 00:41:20.242 read: IOPS=1802, BW=14.1MiB/s (14.8MB/s)(70.4MiB/5001msec) 00:41:20.242 slat (nsec): min=4260, max=69515, avg=19699.53, stdev=9935.37 00:41:20.242 clat (usec): min=646, max=9242, avg=4362.19, stdev=671.33 00:41:20.242 lat (usec): min=658, max=9264, avg=4381.89, stdev=671.58 00:41:20.242 clat percentiles (usec): 00:41:20.242 | 1.00th=[ 2409], 5.00th=[ 3621], 10.00th=[ 3916], 20.00th=[ 4047], 00:41:20.242 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:20.242 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 5276], 95.00th=[ 5407], 00:41:20.242 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 8094], 99.95th=[ 8848], 00:41:20.242 | 99.99th=[ 9241] 00:41:20.242 bw ( KiB/s): min=11895, max=15184, per=24.97%, avg=14419.90, stdev=1235.35, samples=10 00:41:20.242 
iops : min= 1486, max= 1898, avg=1802.40, stdev=154.62, samples=10 00:41:20.242 lat (usec) : 750=0.01%, 1000=0.06% 00:41:20.242 lat (msec) : 2=0.63%, 4=14.69%, 10=84.62% 00:41:20.242 cpu : usr=96.12%, sys=3.44%, ctx=7, majf=0, minf=80 00:41:20.242 IO depths : 1=0.6%, 2=18.5%, 4=55.2%, 8=25.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.242 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.242 issued rwts: total=9016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.242 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:20.242 filename1: (groupid=0, jobs=1): err= 0: pid=959677: Sun Nov 17 19:02:06 2024 00:41:20.242 read: IOPS=1808, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5007msec) 00:41:20.242 slat (nsec): min=3930, max=74632, avg=20337.13, stdev=8827.80 00:41:20.242 clat (usec): min=838, max=13366, avg=4355.46, stdev=637.50 00:41:20.242 lat (usec): min=857, max=13383, avg=4375.80, stdev=636.74 00:41:20.242 clat percentiles (usec): 00:41:20.242 | 1.00th=[ 3032], 5.00th=[ 3687], 10.00th=[ 3851], 20.00th=[ 4047], 00:41:20.242 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:41:20.242 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 5276], 95.00th=[ 5407], 00:41:20.242 | 99.00th=[ 5932], 99.50th=[ 6849], 99.90th=[ 8979], 99.95th=[13173], 00:41:20.242 | 99.99th=[13304] 00:41:20.242 bw ( KiB/s): min=11648, max=15328, per=25.07%, avg=14476.60, stdev=1296.38, samples=10 00:41:20.242 iops : min= 1456, max= 1916, avg=1809.50, stdev=162.01, samples=10 00:41:20.242 lat (usec) : 1000=0.04% 00:41:20.242 lat (msec) : 2=0.20%, 4=16.56%, 10=83.11%, 20=0.09% 00:41:20.242 cpu : usr=96.54%, sys=2.94%, ctx=26, majf=0, minf=74 00:41:20.242 IO depths : 1=0.3%, 2=13.5%, 4=58.9%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.242 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.242 issued rwts: total=9054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.242 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:20.242 00:41:20.242 Run status group 0 (all jobs): 00:41:20.242 READ: bw=56.4MiB/s (59.1MB/s), 14.1MiB/s-14.1MiB/s (14.8MB/s-14.8MB/s), io=282MiB (296MB), run=5001-5007msec 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.242 00:41:20.242 real 0m24.173s 00:41:20.242 user 4m33.021s 00:41:20.242 sys 0m6.056s 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:20.242 19:02:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:20.242 ************************************ 00:41:20.242 END TEST fio_dif_rand_params 00:41:20.242 ************************************ 00:41:20.242 19:02:06 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:20.242 19:02:06 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:20.242 19:02:06 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:20.242 19:02:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:20.242 ************************************ 00:41:20.242 START TEST fio_dif_digest 00:41:20.242 ************************************ 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime 
iodepth files 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:20.242 bdev_null0 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:20.242 [2024-11-17 19:02:06.622869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:20.242 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:20.243 19:02:06 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:20.243 { 00:41:20.243 "params": { 00:41:20.243 "name": "Nvme$subsystem", 00:41:20.243 "trtype": "$TEST_TRANSPORT", 00:41:20.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.243 "adrfam": "ipv4", 00:41:20.243 "trsvcid": "$NVMF_PORT", 00:41:20.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.243 "hdgst": ${hdgst:-false}, 00:41:20.243 "ddgst": ${ddgst:-false} 00:41:20.243 }, 00:41:20.243 "method": "bdev_nvme_attach_controller" 00:41:20.243 } 00:41:20.243 EOF 00:41:20.243 )") 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1349 -- # grep libasan 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:20.243 "params": { 00:41:20.243 "name": "Nvme0", 00:41:20.243 "trtype": "tcp", 00:41:20.243 "traddr": "10.0.0.2", 00:41:20.243 "adrfam": "ipv4", 00:41:20.243 "trsvcid": "4420", 00:41:20.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:20.243 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:20.243 "hdgst": true, 00:41:20.243 "ddgst": true 00:41:20.243 }, 00:41:20.243 "method": "bdev_nvme_attach_controller" 00:41:20.243 }' 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:20.243 19:02:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.502 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:20.502 ... 00:41:20.502 fio-3.35 00:41:20.502 Starting 3 threads 00:41:32.713 00:41:32.713 filename0: (groupid=0, jobs=1): err= 0: pid=960527: Sun Nov 17 19:02:17 2024 00:41:32.713 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(258MiB/10047msec) 00:41:32.713 slat (nsec): min=4485, max=83845, avg=14454.64, stdev=1978.06 00:41:32.713 clat (usec): min=11344, max=51305, avg=14564.16, stdev=1470.77 00:41:32.713 lat (usec): min=11358, max=51319, avg=14578.61, stdev=1470.77 00:41:32.713 clat percentiles (usec): 00:41:32.713 | 1.00th=[12256], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 00:41:32.713 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:41:32.713 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:41:32.713 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18744], 99.95th=[48497], 00:41:32.713 | 99.99th=[51119] 00:41:32.713 bw ( KiB/s): min=25856, max=27136, per=32.94%, avg=26393.60, stdev=370.51, samples=20 00:41:32.713 iops : min= 202, max= 212, avg=206.20, stdev= 2.89, samples=20 00:41:32.713 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:41:32.713 cpu : usr=94.15%, sys=5.34%, ctx=25, majf=0, minf=172 00:41:32.713 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:32.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.713 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.713 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:32.713 filename0: (groupid=0, jobs=1): err= 0: 
pid=960528: Sun Nov 17 19:02:17 2024 00:41:32.713 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(254MiB/10046msec) 00:41:32.713 slat (nsec): min=5196, max=34048, avg=14709.56, stdev=1624.07 00:41:32.713 clat (usec): min=11715, max=47806, avg=14785.70, stdev=1392.81 00:41:32.713 lat (usec): min=11730, max=47821, avg=14800.41, stdev=1392.85 00:41:32.713 clat percentiles (usec): 00:41:32.713 | 1.00th=[12518], 5.00th=[13304], 10.00th=[13566], 20.00th=[13960], 00:41:32.713 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:41:32.713 | 70.00th=[15139], 80.00th=[15533], 90.00th=[15926], 95.00th=[16319], 00:41:32.713 | 99.00th=[17171], 99.50th=[17433], 99.90th=[21365], 99.95th=[46400], 00:41:32.713 | 99.99th=[47973] 00:41:32.713 bw ( KiB/s): min=25344, max=26880, per=32.44%, avg=25996.80, stdev=402.42, samples=20 00:41:32.713 iops : min= 198, max= 210, avg=203.10, stdev= 3.14, samples=20 00:41:32.713 lat (msec) : 20=99.85%, 50=0.15% 00:41:32.713 cpu : usr=93.38%, sys=5.82%, ctx=204, majf=0, minf=126 00:41:32.713 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:32.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.713 issued rwts: total=2033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.713 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:32.713 filename0: (groupid=0, jobs=1): err= 0: pid=960529: Sun Nov 17 19:02:17 2024 00:41:32.713 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(274MiB/10047msec) 00:41:32.713 slat (nsec): min=4331, max=65746, avg=15739.22, stdev=3535.74 00:41:32.713 clat (usec): min=10466, max=53178, avg=13703.68, stdev=1507.12 00:41:32.713 lat (usec): min=10482, max=53193, avg=13719.42, stdev=1507.04 00:41:32.713 clat percentiles (usec): 00:41:32.713 | 1.00th=[11207], 5.00th=[12125], 10.00th=[12518], 20.00th=[12911], 00:41:32.713 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 
60.00th=[13960], 00:41:32.713 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:41:32.713 | 99.00th=[16057], 99.50th=[16319], 99.90th=[24511], 99.95th=[49021], 00:41:32.713 | 99.99th=[53216] 00:41:32.713 bw ( KiB/s): min=26368, max=28672, per=35.00%, avg=28044.80, stdev=534.90, samples=20 00:41:32.713 iops : min= 206, max= 224, avg=219.10, stdev= 4.18, samples=20 00:41:32.713 lat (msec) : 20=99.77%, 50=0.18%, 100=0.05% 00:41:32.713 cpu : usr=92.51%, sys=6.34%, ctx=244, majf=0, minf=197 00:41:32.713 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:32.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.713 issued rwts: total=2193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.713 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:32.713 00:41:32.713 Run status group 0 (all jobs): 00:41:32.713 READ: bw=78.3MiB/s (82.1MB/s), 25.3MiB/s-27.3MiB/s (26.5MB/s-28.6MB/s), io=786MiB (824MB), run=10046-10047msec 00:41:32.713 19:02:17 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:32.713 19:02:17 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:32.713 19:02:17 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:32.713 19:02:17 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:32.713 19:02:17 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:32.713 19:02:17 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:32.713 19:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.713 19:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:32.713 19:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.714 19:02:17 nvmf_dif.fio_dif_digest -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:32.714 19:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.714 19:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:32.714 19:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.714 00:41:32.714 real 0m11.277s 00:41:32.714 user 0m29.319s 00:41:32.714 sys 0m2.079s 00:41:32.714 19:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:32.714 19:02:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:32.714 ************************************ 00:41:32.714 END TEST fio_dif_digest 00:41:32.714 ************************************ 00:41:32.714 19:02:17 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:32.714 19:02:17 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:32.714 19:02:17 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:32.714 19:02:17 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:32.714 19:02:17 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:32.714 19:02:17 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:32.714 19:02:17 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:32.714 19:02:17 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:32.714 rmmod nvme_tcp 00:41:32.714 rmmod nvme_fabrics 00:41:32.714 rmmod nvme_keyring 00:41:32.714 19:02:17 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:32.714 19:02:17 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:32.714 19:02:17 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:32.714 19:02:17 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 954496 ']' 00:41:32.714 19:02:17 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 954496 00:41:32.714 19:02:17 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 954496 ']' 00:41:32.714 19:02:17 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 954496 00:41:32.714 
19:02:17 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:41:32.714 19:02:17 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:32.714 19:02:17 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 954496 00:41:32.714 19:02:18 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:32.714 19:02:18 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:32.714 19:02:18 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 954496' 00:41:32.714 killing process with pid 954496 00:41:32.714 19:02:18 nvmf_dif -- common/autotest_common.sh@973 -- # kill 954496 00:41:32.714 19:02:18 nvmf_dif -- common/autotest_common.sh@978 -- # wait 954496 00:41:32.714 19:02:18 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:32.714 19:02:18 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:32.973 Waiting for block devices as requested 00:41:32.973 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:32.973 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:33.232 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:33.232 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:33.232 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:33.492 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:33.492 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:33.492 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:33.492 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:33.778 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:33.778 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:33.778 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:33.778 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:34.036 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:34.036 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:34.036 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:34.036 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:41:34.294 19:02:20 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:34.294 19:02:20 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:34.294 19:02:20 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:34.294 19:02:20 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:41:34.294 19:02:20 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:34.294 19:02:20 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:41:34.294 19:02:20 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:34.294 19:02:20 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:34.294 19:02:20 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.294 19:02:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:34.294 19:02:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.197 19:02:22 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:36.197 00:41:36.197 real 1m7.088s 00:41:36.197 user 6m29.884s 00:41:36.197 sys 0m17.753s 00:41:36.197 19:02:22 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:36.197 19:02:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:36.197 ************************************ 00:41:36.197 END TEST nvmf_dif 00:41:36.197 ************************************ 00:41:36.197 19:02:22 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:36.197 19:02:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:36.197 19:02:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:36.197 19:02:22 -- common/autotest_common.sh@10 -- # set +x 00:41:36.197 ************************************ 00:41:36.197 START TEST nvmf_abort_qd_sizes 00:41:36.197 ************************************ 00:41:36.197 19:02:22 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:36.457 * Looking for test storage... 00:41:36.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:36.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.457 --rc genhtml_branch_coverage=1 00:41:36.457 --rc genhtml_function_coverage=1 00:41:36.457 --rc genhtml_legend=1 00:41:36.457 --rc geninfo_all_blocks=1 00:41:36.457 --rc geninfo_unexecuted_blocks=1 00:41:36.457 00:41:36.457 ' 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:36.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.457 --rc genhtml_branch_coverage=1 00:41:36.457 --rc genhtml_function_coverage=1 00:41:36.457 --rc genhtml_legend=1 00:41:36.457 --rc 
geninfo_all_blocks=1 00:41:36.457 --rc geninfo_unexecuted_blocks=1 00:41:36.457 00:41:36.457 ' 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:36.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.457 --rc genhtml_branch_coverage=1 00:41:36.457 --rc genhtml_function_coverage=1 00:41:36.457 --rc genhtml_legend=1 00:41:36.457 --rc geninfo_all_blocks=1 00:41:36.457 --rc geninfo_unexecuted_blocks=1 00:41:36.457 00:41:36.457 ' 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:36.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.457 --rc genhtml_branch_coverage=1 00:41:36.457 --rc genhtml_function_coverage=1 00:41:36.457 --rc genhtml_legend=1 00:41:36.457 --rc geninfo_all_blocks=1 00:41:36.457 --rc geninfo_unexecuted_blocks=1 00:41:36.457 00:41:36.457 ' 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:36.457 19:02:22 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:36.457 19:02:22 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:36.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:36.457 19:02:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:38.991 19:02:24 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:38.991 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:38.991 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:38.991 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:38.992 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:38.992 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:41:38.992 19:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:38.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:38.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:41:38.992 00:41:38.992 --- 10.0.0.2 ping statistics --- 00:41:38.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:38.992 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:38.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:38.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:41:38.992 00:41:38.992 --- 10.0.0.1 ping statistics --- 00:41:38.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:38.992 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:38.992 19:02:25 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:39.928 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:39.928 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:39.928 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:39.928 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:39.928 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:39.928 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:39.928 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:39.928 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:39.928 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:41:39.928 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:41:39.928 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:41:39.928 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:41:39.928 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:41:39.928 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:41:39.928 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:41:39.928 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:41:40.868 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:41.126 19:02:27 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=965372 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 965372 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 965372 ']' 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:41.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:41.126 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:41.126 [2024-11-17 19:02:27.529995] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:41:41.126 [2024-11-17 19:02:27.530105] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:41.126 [2024-11-17 19:02:27.606359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:41.126 [2024-11-17 19:02:27.654049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:41.126 [2024-11-17 19:02:27.654101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:41.126 [2024-11-17 19:02:27.654115] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:41.126 [2024-11-17 19:02:27.654126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:41.126 [2024-11-17 19:02:27.654136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:41.126 [2024-11-17 19:02:27.655552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:41.126 [2024-11-17 19:02:27.655611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:41.126 [2024-11-17 19:02:27.655694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:41.126 [2024-11-17 19:02:27.655712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:41.384 19:02:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:41.384 ************************************ 00:41:41.384 START TEST spdk_target_abort 00:41:41.384 ************************************ 00:41:41.384 19:02:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:41:41.384 19:02:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:41:41.384 19:02:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:41:41.384 19:02:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.384 19:02:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:44.674 spdk_targetn1 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:44.674 [2024-11-17 19:02:30.653607] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:44.674 [2024-11-17 19:02:30.693928] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:44.674 19:02:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:47.966 Initializing NVMe Controllers 00:41:47.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:47.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:47.966 Initialization complete. Launching workers. 
00:41:47.966 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12321, failed: 0 00:41:47.966 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1209, failed to submit 11112 00:41:47.966 success 724, unsuccessful 485, failed 0 00:41:47.966 19:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:47.966 19:02:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:51.251 Initializing NVMe Controllers 00:41:51.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:51.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:51.251 Initialization complete. Launching workers. 00:41:51.251 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8688, failed: 0 00:41:51.251 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1236, failed to submit 7452 00:41:51.251 success 342, unsuccessful 894, failed 0 00:41:51.251 19:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:51.251 19:02:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:54.542 Initializing NVMe Controllers 00:41:54.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:41:54.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:54.542 Initialization complete. Launching workers. 
00:41:54.542 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31094, failed: 0 00:41:54.542 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2766, failed to submit 28328 00:41:54.542 success 504, unsuccessful 2262, failed 0 00:41:54.542 19:02:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:41:54.542 19:02:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.542 19:02:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:54.542 19:02:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.542 19:02:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:41:54.542 19:02:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.542 19:02:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:55.479 19:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.479 19:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 965372 00:41:55.479 19:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 965372 ']' 00:41:55.479 19:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 965372 00:41:55.479 19:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:41:55.479 19:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:55.479 19:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 965372 00:41:55.479 19:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:55.479 19:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:55.479 19:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 965372' 00:41:55.479 killing process with pid 965372 00:41:55.479 19:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 965372 00:41:55.479 19:02:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 965372 00:41:55.738 00:41:55.739 real 0m14.308s 00:41:55.739 user 0m54.309s 00:41:55.739 sys 0m2.451s 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:55.739 ************************************ 00:41:55.739 END TEST spdk_target_abort 00:41:55.739 ************************************ 00:41:55.739 19:02:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:41:55.739 19:02:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:55.739 19:02:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:55.739 19:02:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:55.739 ************************************ 00:41:55.739 START TEST kernel_target_abort 00:41:55.739 ************************************ 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:41:55.739 19:02:42 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:55.739 19:02:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:57.117 Waiting for block devices as requested 00:41:57.117 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:57.117 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:57.117 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:57.376 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:57.376 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:57.376 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:57.376 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:57.635 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:57.635 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:57.635 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:57.635 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:57.895 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:57.895 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:57.896 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:58.155 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:58.155 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:58.155 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:58.415 No valid GPT data, bailing 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:41:58.415 00:41:58.415 Discovery Log Number of Records 2, Generation counter 2 00:41:58.415 =====Discovery Log Entry 0====== 00:41:58.415 trtype: tcp 00:41:58.415 adrfam: ipv4 00:41:58.415 subtype: current discovery subsystem 00:41:58.415 treq: not specified, sq flow control disable supported 00:41:58.415 portid: 1 00:41:58.415 trsvcid: 4420 00:41:58.415 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:58.415 traddr: 10.0.0.1 00:41:58.415 eflags: none 00:41:58.415 sectype: none 00:41:58.415 =====Discovery Log Entry 1====== 00:41:58.415 trtype: tcp 00:41:58.415 adrfam: ipv4 00:41:58.415 subtype: nvme subsystem 00:41:58.415 treq: not specified, sq flow control disable supported 00:41:58.415 portid: 1 00:41:58.415 trsvcid: 4420 00:41:58.415 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:58.415 traddr: 10.0.0.1 00:41:58.415 eflags: none 00:41:58.415 sectype: none 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:58.415 19:02:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:01.698 Initializing NVMe Controllers 00:42:01.698 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:01.698 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:01.698 Initialization complete. Launching workers. 
00:42:01.698 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57001, failed: 0 00:42:01.698 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 57001, failed to submit 0 00:42:01.698 success 0, unsuccessful 57001, failed 0 00:42:01.698 19:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:01.698 19:02:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:04.979 Initializing NVMe Controllers 00:42:04.979 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:04.979 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:04.979 Initialization complete. Launching workers. 00:42:04.979 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 99493, failed: 0 00:42:04.979 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25070, failed to submit 74423 00:42:04.979 success 0, unsuccessful 25070, failed 0 00:42:04.979 19:02:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:04.979 19:02:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:08.263 Initializing NVMe Controllers 00:42:08.263 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:08.263 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:08.263 Initialization complete. Launching workers. 
00:42:08.263 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96101, failed: 0 00:42:08.263 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24030, failed to submit 72071 00:42:08.263 success 0, unsuccessful 24030, failed 0 00:42:08.263 19:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:08.263 19:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:08.263 19:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:42:08.263 19:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:08.263 19:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:08.263 19:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:08.263 19:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:08.263 19:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:08.263 19:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:08.263 19:02:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:09.202 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:09.202 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:09.202 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:09.202 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:09.202 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:09.202 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:09.202 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:09.202 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:09.202 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:09.202 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:09.202 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:09.202 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:09.202 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:09.202 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:09.202 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:09.202 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:10.140 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:10.140 00:42:10.140 real 0m14.527s 00:42:10.140 user 0m6.715s 00:42:10.140 sys 0m3.317s 00:42:10.140 19:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:10.140 19:02:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:10.140 ************************************ 00:42:10.140 END TEST kernel_target_abort 00:42:10.140 ************************************ 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:10.399 rmmod nvme_tcp 00:42:10.399 rmmod nvme_fabrics 00:42:10.399 rmmod nvme_keyring 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 965372 ']' 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 965372 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 965372 ']' 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 965372 00:42:10.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (965372) - No such process 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 965372 is not found' 00:42:10.399 Process with pid 965372 is not found 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:10.399 19:02:56 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:11.334 Waiting for block devices as requested 00:42:11.593 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:11.593 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:11.852 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:11.852 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:11.852 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:11.852 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:12.111 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:12.111 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:12.111 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:12.111 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:12.370 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:12.370 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:12.370 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:12.370 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:12.640 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:42:12.640 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:12.640 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:12.905 19:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:12.905 19:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:12.905 19:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:12.905 19:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:42:12.905 19:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:12.905 19:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:42:12.905 19:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:12.905 19:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:12.905 19:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:12.905 19:02:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:12.905 19:02:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:14.808 19:03:01 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:14.808 00:42:14.808 real 0m38.580s 00:42:14.808 user 1m3.321s 00:42:14.808 sys 0m9.373s 00:42:14.808 19:03:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:14.808 19:03:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:14.808 ************************************ 00:42:14.808 END TEST nvmf_abort_qd_sizes 00:42:14.808 ************************************ 00:42:14.808 19:03:01 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:14.808 19:03:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:14.808 19:03:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:42:14.808 19:03:01 -- common/autotest_common.sh@10 -- # set +x 00:42:14.808 ************************************ 00:42:14.808 START TEST keyring_file 00:42:14.808 ************************************ 00:42:14.808 19:03:01 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:15.068 * Looking for test storage... 00:42:15.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:15.068 19:03:01 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:15.068 19:03:01 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:42:15.068 19:03:01 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:15.068 19:03:01 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:15.068 19:03:01 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:15.068 19:03:01 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:15.068 19:03:01 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.068 --rc genhtml_branch_coverage=1 00:42:15.068 --rc genhtml_function_coverage=1 00:42:15.068 --rc genhtml_legend=1 00:42:15.068 --rc geninfo_all_blocks=1 00:42:15.068 --rc geninfo_unexecuted_blocks=1 00:42:15.068 00:42:15.068 ' 00:42:15.068 19:03:01 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.068 --rc genhtml_branch_coverage=1 00:42:15.068 --rc genhtml_function_coverage=1 00:42:15.068 --rc genhtml_legend=1 00:42:15.068 --rc geninfo_all_blocks=1 00:42:15.068 --rc 
geninfo_unexecuted_blocks=1 00:42:15.068 00:42:15.068 ' 00:42:15.068 19:03:01 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.068 --rc genhtml_branch_coverage=1 00:42:15.068 --rc genhtml_function_coverage=1 00:42:15.068 --rc genhtml_legend=1 00:42:15.068 --rc geninfo_all_blocks=1 00:42:15.068 --rc geninfo_unexecuted_blocks=1 00:42:15.068 00:42:15.068 ' 00:42:15.068 19:03:01 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.068 --rc genhtml_branch_coverage=1 00:42:15.068 --rc genhtml_function_coverage=1 00:42:15.068 --rc genhtml_legend=1 00:42:15.068 --rc geninfo_all_blocks=1 00:42:15.068 --rc geninfo_unexecuted_blocks=1 00:42:15.068 00:42:15.068 ' 00:42:15.068 19:03:01 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:15.068 19:03:01 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:15.068 19:03:01 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:15.068 19:03:01 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:15.068 19:03:01 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:15.069 19:03:01 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.069 19:03:01 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.069 19:03:01 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.069 19:03:01 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:15.069 19:03:01 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:42:15.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:15.069 19:03:01 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:15.069 19:03:01 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:15.069 19:03:01 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:15.069 19:03:01 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:15.069 19:03:01 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:15.069 19:03:01 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SEPCD4reIJ 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SEPCD4reIJ 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SEPCD4reIJ 00:42:15.069 19:03:01 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.SEPCD4reIJ 00:42:15.069 19:03:01 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nxdbSLH33w 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:15.069 19:03:01 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nxdbSLH33w 00:42:15.069 19:03:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nxdbSLH33w 00:42:15.069 19:03:01 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.nxdbSLH33w 
00:42:15.069 19:03:01 keyring_file -- keyring/file.sh@30 -- # tgtpid=971304 00:42:15.069 19:03:01 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:15.069 19:03:01 keyring_file -- keyring/file.sh@32 -- # waitforlisten 971304 00:42:15.069 19:03:01 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 971304 ']' 00:42:15.069 19:03:01 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:15.069 19:03:01 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:15.069 19:03:01 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:15.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:15.069 19:03:01 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:15.069 19:03:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:15.328 [2024-11-17 19:03:01.686339] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:42:15.328 [2024-11-17 19:03:01.686462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid971304 ] 00:42:15.328 [2024-11-17 19:03:01.756937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.328 [2024-11-17 19:03:01.803615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.586 19:03:02 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:15.586 19:03:02 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:15.586 19:03:02 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:15.586 19:03:02 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.586 19:03:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:15.586 [2024-11-17 19:03:02.062494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:15.586 null0 00:42:15.586 [2024-11-17 19:03:02.094559] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:15.586 [2024-11-17 19:03:02.095045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:15.586 19:03:02 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.586 19:03:02 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:15.586 19:03:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:15.586 19:03:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:15.586 19:03:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:15.586 19:03:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:42:15.586 19:03:02 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:15.586 19:03:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:15.586 19:03:02 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:15.586 19:03:02 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.586 19:03:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:15.586 [2024-11-17 19:03:02.118609] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:15.586 request: 00:42:15.586 { 00:42:15.586 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:15.586 "secure_channel": false, 00:42:15.586 "listen_address": { 00:42:15.587 "trtype": "tcp", 00:42:15.587 "traddr": "127.0.0.1", 00:42:15.587 "trsvcid": "4420" 00:42:15.587 }, 00:42:15.587 "method": "nvmf_subsystem_add_listener", 00:42:15.587 "req_id": 1 00:42:15.587 } 00:42:15.587 Got JSON-RPC error response 00:42:15.587 response: 00:42:15.587 { 00:42:15.587 "code": -32602, 00:42:15.587 "message": "Invalid parameters" 00:42:15.587 } 00:42:15.587 19:03:02 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:15.587 19:03:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:15.587 19:03:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:15.587 19:03:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:15.587 19:03:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:15.587 19:03:02 keyring_file -- keyring/file.sh@47 -- # bperfpid=971337 00:42:15.587 19:03:02 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:15.587 19:03:02 keyring_file -- keyring/file.sh@49 -- # waitforlisten 971337 /var/tmp/bperf.sock 00:42:15.587 19:03:02 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 971337 ']' 00:42:15.587 19:03:02 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:15.587 19:03:02 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:15.587 19:03:02 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:15.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:15.587 19:03:02 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:15.587 19:03:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:15.845 [2024-11-17 19:03:02.168414] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 00:42:15.845 [2024-11-17 19:03:02.168479] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid971337 ] 00:42:15.845 [2024-11-17 19:03:02.233763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.845 [2024-11-17 19:03:02.279130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:15.845 19:03:02 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:15.845 19:03:02 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:15.845 19:03:02 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SEPCD4reIJ 00:42:15.845 19:03:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SEPCD4reIJ 00:42:16.412 19:03:02 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.nxdbSLH33w 00:42:16.412 19:03:02 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.nxdbSLH33w 00:42:16.412 19:03:02 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:16.412 19:03:02 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:16.412 19:03:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.412 19:03:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.412 19:03:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:16.671 19:03:03 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.SEPCD4reIJ == \/\t\m\p\/\t\m\p\.\S\E\P\C\D\4\r\e\I\J ]] 00:42:16.671 19:03:03 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:16.671 19:03:03 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:16.671 19:03:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:16.671 19:03:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:16.671 19:03:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:17.237 19:03:03 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.nxdbSLH33w == \/\t\m\p\/\t\m\p\.\n\x\d\b\S\L\H\3\3\w ]] 00:42:17.237 19:03:03 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:17.237 19:03:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:17.237 19:03:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:17.237 19:03:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:17.237 19:03:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:17.237 19:03:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:42:17.237 19:03:03 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:17.237 19:03:03 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:17.237 19:03:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:17.237 19:03:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:17.237 19:03:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:17.237 19:03:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:17.237 19:03:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:17.527 19:03:04 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:17.527 19:03:04 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:17.527 19:03:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:17.811 [2024-11-17 19:03:04.338568] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:18.072 nvme0n1 00:42:18.072 19:03:04 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:18.072 19:03:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:18.072 19:03:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:18.072 19:03:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:18.072 19:03:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:18.072 19:03:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:42:18.330 19:03:04 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:18.330 19:03:04 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:18.330 19:03:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:18.330 19:03:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:18.330 19:03:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:18.330 19:03:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:18.330 19:03:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:18.588 19:03:05 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:18.588 19:03:05 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:18.588 Running I/O for 1 seconds... 00:42:19.966 10415.00 IOPS, 40.68 MiB/s 00:42:19.966 Latency(us) 00:42:19.966 [2024-11-17T18:03:06.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:19.966 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:19.966 nvme0n1 : 1.01 10466.34 40.88 0.00 0.00 12194.35 5121.52 23204.60 00:42:19.966 [2024-11-17T18:03:06.542Z] =================================================================================================================== 00:42:19.966 [2024-11-17T18:03:06.542Z] Total : 10466.34 40.88 0.00 0.00 12194.35 5121.52 23204.60 00:42:19.966 { 00:42:19.966 "results": [ 00:42:19.966 { 00:42:19.966 "job": "nvme0n1", 00:42:19.966 "core_mask": "0x2", 00:42:19.966 "workload": "randrw", 00:42:19.966 "percentage": 50, 00:42:19.966 "status": "finished", 00:42:19.966 "queue_depth": 128, 00:42:19.966 "io_size": 4096, 00:42:19.966 "runtime": 1.00742, 00:42:19.966 "iops": 10466.339758988306, 00:42:19.966 "mibps": 40.88413968354807, 
00:42:19.966 "io_failed": 0, 00:42:19.966 "io_timeout": 0, 00:42:19.966 "avg_latency_us": 12194.354615860171, 00:42:19.966 "min_latency_us": 5121.517037037037, 00:42:19.966 "max_latency_us": 23204.59851851852 00:42:19.966 } 00:42:19.966 ], 00:42:19.966 "core_count": 1 00:42:19.966 } 00:42:19.966 19:03:06 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:19.966 19:03:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:19.966 19:03:06 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:19.966 19:03:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:19.966 19:03:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:19.966 19:03:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:19.966 19:03:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:19.966 19:03:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.225 19:03:06 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:20.225 19:03:06 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:20.225 19:03:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:20.225 19:03:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:20.225 19:03:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:20.225 19:03:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.225 19:03:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:20.483 19:03:06 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:20.483 19:03:06 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:20.483 19:03:06 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:20.483 19:03:06 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:20.483 19:03:06 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:20.483 19:03:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:20.483 19:03:06 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:20.483 19:03:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:20.483 19:03:06 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:20.483 19:03:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:20.742 [2024-11-17 19:03:07.265229] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:20.742 [2024-11-17 19:03:07.265847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252dce0 (107): Transport endpoint is not connected 00:42:20.742 [2024-11-17 19:03:07.266837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252dce0 (9): Bad file descriptor 00:42:20.742 [2024-11-17 19:03:07.267836] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:20.742 [2024-11-17 19:03:07.267855] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:20.742 [2024-11-17 19:03:07.267868] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:20.742 [2024-11-17 19:03:07.267883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:42:20.742 request: 00:42:20.742 { 00:42:20.742 "name": "nvme0", 00:42:20.742 "trtype": "tcp", 00:42:20.742 "traddr": "127.0.0.1", 00:42:20.742 "adrfam": "ipv4", 00:42:20.742 "trsvcid": "4420", 00:42:20.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:20.742 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:20.742 "prchk_reftag": false, 00:42:20.742 "prchk_guard": false, 00:42:20.742 "hdgst": false, 00:42:20.742 "ddgst": false, 00:42:20.742 "psk": "key1", 00:42:20.742 "allow_unrecognized_csi": false, 00:42:20.742 "method": "bdev_nvme_attach_controller", 00:42:20.742 "req_id": 1 00:42:20.742 } 00:42:20.742 Got JSON-RPC error response 00:42:20.742 response: 00:42:20.742 { 00:42:20.742 "code": -5, 00:42:20.742 "message": "Input/output error" 00:42:20.742 } 00:42:20.742 19:03:07 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:20.742 19:03:07 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:20.742 19:03:07 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:20.742 19:03:07 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:20.742 19:03:07 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:20.742 19:03:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:20.742 19:03:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:20.742 19:03:07 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:42:20.742 19:03:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:20.742 19:03:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:21.308 19:03:07 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:21.308 19:03:07 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:21.308 19:03:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:21.308 19:03:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:21.308 19:03:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:21.308 19:03:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:21.308 19:03:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:21.308 19:03:07 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:21.308 19:03:07 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:21.308 19:03:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:21.874 19:03:08 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:21.874 19:03:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:21.874 19:03:08 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:21.874 19:03:08 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:21.874 19:03:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:22.443 19:03:08 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:42:22.443 19:03:08 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.SEPCD4reIJ 00:42:22.443 19:03:08 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.SEPCD4reIJ 00:42:22.443 19:03:08 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:22.443 19:03:08 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.SEPCD4reIJ 00:42:22.443 19:03:08 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:22.443 19:03:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:22.443 19:03:08 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:22.443 19:03:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:22.443 19:03:08 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SEPCD4reIJ 00:42:22.443 19:03:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SEPCD4reIJ 00:42:22.443 [2024-11-17 19:03:08.981904] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SEPCD4reIJ': 0100660 00:42:22.443 [2024-11-17 19:03:08.981938] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:22.443 request: 00:42:22.443 { 00:42:22.443 "name": "key0", 00:42:22.443 "path": "/tmp/tmp.SEPCD4reIJ", 00:42:22.443 "method": "keyring_file_add_key", 00:42:22.443 "req_id": 1 00:42:22.443 } 00:42:22.443 Got JSON-RPC error response 00:42:22.443 response: 00:42:22.443 { 00:42:22.443 "code": -1, 00:42:22.443 "message": "Operation not permitted" 00:42:22.443 } 00:42:22.443 19:03:09 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:22.443 19:03:09 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:22.443 19:03:09 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:22.443 19:03:09 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:22.443 19:03:09 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.SEPCD4reIJ 00:42:22.443 19:03:09 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SEPCD4reIJ 00:42:22.443 19:03:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SEPCD4reIJ 00:42:23.009 19:03:09 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.SEPCD4reIJ 00:42:23.009 19:03:09 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:23.009 19:03:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:23.009 19:03:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:23.009 19:03:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:23.009 19:03:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:23.009 19:03:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:23.009 19:03:09 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:23.009 19:03:09 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:23.009 19:03:09 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:23.009 19:03:09 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:23.009 19:03:09 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:23.009 19:03:09 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:23.009 19:03:09 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:23.009 19:03:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:23.009 19:03:09 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:23.009 19:03:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:23.267 [2024-11-17 19:03:09.812149] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.SEPCD4reIJ': No such file or directory 00:42:23.267 [2024-11-17 19:03:09.812184] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:23.267 [2024-11-17 19:03:09.812217] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:23.267 [2024-11-17 19:03:09.812229] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:23.267 [2024-11-17 19:03:09.812241] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:23.267 [2024-11-17 19:03:09.812252] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:23.267 request: 00:42:23.267 { 00:42:23.267 "name": "nvme0", 00:42:23.267 "trtype": "tcp", 00:42:23.267 "traddr": "127.0.0.1", 00:42:23.267 "adrfam": "ipv4", 00:42:23.267 "trsvcid": "4420", 00:42:23.267 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:23.267 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:42:23.267 "prchk_reftag": false, 00:42:23.267 "prchk_guard": false, 00:42:23.267 "hdgst": false, 00:42:23.267 "ddgst": false, 00:42:23.267 "psk": "key0", 00:42:23.267 "allow_unrecognized_csi": false, 00:42:23.267 "method": "bdev_nvme_attach_controller", 00:42:23.267 "req_id": 1 00:42:23.267 } 00:42:23.267 Got JSON-RPC error response 00:42:23.267 response: 00:42:23.267 { 00:42:23.267 "code": -19, 00:42:23.267 "message": "No such device" 00:42:23.267 } 00:42:23.267 19:03:09 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:23.267 19:03:09 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:23.267 19:03:09 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:23.267 19:03:09 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:23.267 19:03:09 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:23.267 19:03:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:23.834 19:03:10 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:23.834 19:03:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:23.834 19:03:10 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:23.834 19:03:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:23.834 19:03:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:23.834 19:03:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:23.834 19:03:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.P9uq0uhHjI 00:42:23.834 19:03:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:23.834 19:03:10 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:23.834 19:03:10 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:42:23.834 19:03:10 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:23.834 19:03:10 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:23.834 19:03:10 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:23.834 19:03:10 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:23.834 19:03:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.P9uq0uhHjI 00:42:23.834 19:03:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.P9uq0uhHjI 00:42:23.834 19:03:10 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.P9uq0uhHjI 00:42:23.834 19:03:10 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.P9uq0uhHjI 00:42:23.834 19:03:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.P9uq0uhHjI 00:42:24.092 19:03:10 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:24.092 19:03:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:24.350 nvme0n1 00:42:24.351 19:03:10 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:24.351 19:03:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:24.351 19:03:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:24.351 19:03:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:24.351 19:03:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:24.351 19:03:10 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.608 19:03:11 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:24.608 19:03:11 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:24.608 19:03:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:24.867 19:03:11 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:24.867 19:03:11 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:24.867 19:03:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:24.867 19:03:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:24.867 19:03:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:25.125 19:03:11 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:25.125 19:03:11 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:25.125 19:03:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:25.125 19:03:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:25.125 19:03:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:25.125 19:03:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:25.125 19:03:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:25.384 19:03:11 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:25.384 19:03:11 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:25.384 19:03:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:42:25.642 19:03:12 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:25.642 19:03:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:25.642 19:03:12 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:25.902 19:03:12 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:25.902 19:03:12 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.P9uq0uhHjI 00:42:25.902 19:03:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.P9uq0uhHjI 00:42:26.160 19:03:12 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.nxdbSLH33w 00:42:26.160 19:03:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.nxdbSLH33w 00:42:26.418 19:03:12 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:26.418 19:03:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:26.986 nvme0n1 00:42:26.986 19:03:13 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:26.986 19:03:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:27.245 19:03:13 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:27.245 "subsystems": [ 00:42:27.245 { 00:42:27.245 "subsystem": 
"keyring", 00:42:27.245 "config": [ 00:42:27.245 { 00:42:27.245 "method": "keyring_file_add_key", 00:42:27.245 "params": { 00:42:27.245 "name": "key0", 00:42:27.245 "path": "/tmp/tmp.P9uq0uhHjI" 00:42:27.245 } 00:42:27.245 }, 00:42:27.245 { 00:42:27.245 "method": "keyring_file_add_key", 00:42:27.245 "params": { 00:42:27.245 "name": "key1", 00:42:27.245 "path": "/tmp/tmp.nxdbSLH33w" 00:42:27.245 } 00:42:27.245 } 00:42:27.245 ] 00:42:27.245 }, 00:42:27.245 { 00:42:27.245 "subsystem": "iobuf", 00:42:27.245 "config": [ 00:42:27.245 { 00:42:27.245 "method": "iobuf_set_options", 00:42:27.245 "params": { 00:42:27.245 "small_pool_count": 8192, 00:42:27.245 "large_pool_count": 1024, 00:42:27.245 "small_bufsize": 8192, 00:42:27.245 "large_bufsize": 135168, 00:42:27.245 "enable_numa": false 00:42:27.245 } 00:42:27.245 } 00:42:27.245 ] 00:42:27.245 }, 00:42:27.245 { 00:42:27.245 "subsystem": "sock", 00:42:27.245 "config": [ 00:42:27.245 { 00:42:27.245 "method": "sock_set_default_impl", 00:42:27.245 "params": { 00:42:27.245 "impl_name": "posix" 00:42:27.245 } 00:42:27.245 }, 00:42:27.245 { 00:42:27.245 "method": "sock_impl_set_options", 00:42:27.245 "params": { 00:42:27.245 "impl_name": "ssl", 00:42:27.245 "recv_buf_size": 4096, 00:42:27.245 "send_buf_size": 4096, 00:42:27.245 "enable_recv_pipe": true, 00:42:27.245 "enable_quickack": false, 00:42:27.245 "enable_placement_id": 0, 00:42:27.245 "enable_zerocopy_send_server": true, 00:42:27.245 "enable_zerocopy_send_client": false, 00:42:27.245 "zerocopy_threshold": 0, 00:42:27.245 "tls_version": 0, 00:42:27.245 "enable_ktls": false 00:42:27.245 } 00:42:27.245 }, 00:42:27.245 { 00:42:27.245 "method": "sock_impl_set_options", 00:42:27.245 "params": { 00:42:27.245 "impl_name": "posix", 00:42:27.245 "recv_buf_size": 2097152, 00:42:27.245 "send_buf_size": 2097152, 00:42:27.245 "enable_recv_pipe": true, 00:42:27.245 "enable_quickack": false, 00:42:27.245 "enable_placement_id": 0, 00:42:27.245 "enable_zerocopy_send_server": true, 
00:42:27.245 "enable_zerocopy_send_client": false, 00:42:27.245 "zerocopy_threshold": 0, 00:42:27.245 "tls_version": 0, 00:42:27.245 "enable_ktls": false 00:42:27.245 } 00:42:27.245 } 00:42:27.245 ] 00:42:27.245 }, 00:42:27.245 { 00:42:27.245 "subsystem": "vmd", 00:42:27.245 "config": [] 00:42:27.245 }, 00:42:27.245 { 00:42:27.245 "subsystem": "accel", 00:42:27.245 "config": [ 00:42:27.245 { 00:42:27.245 "method": "accel_set_options", 00:42:27.245 "params": { 00:42:27.245 "small_cache_size": 128, 00:42:27.245 "large_cache_size": 16, 00:42:27.245 "task_count": 2048, 00:42:27.245 "sequence_count": 2048, 00:42:27.245 "buf_count": 2048 00:42:27.245 } 00:42:27.245 } 00:42:27.245 ] 00:42:27.245 }, 00:42:27.245 { 00:42:27.245 "subsystem": "bdev", 00:42:27.245 "config": [ 00:42:27.245 { 00:42:27.245 "method": "bdev_set_options", 00:42:27.245 "params": { 00:42:27.245 "bdev_io_pool_size": 65535, 00:42:27.245 "bdev_io_cache_size": 256, 00:42:27.245 "bdev_auto_examine": true, 00:42:27.245 "iobuf_small_cache_size": 128, 00:42:27.245 "iobuf_large_cache_size": 16 00:42:27.245 } 00:42:27.245 }, 00:42:27.245 { 00:42:27.245 "method": "bdev_raid_set_options", 00:42:27.245 "params": { 00:42:27.245 "process_window_size_kb": 1024, 00:42:27.245 "process_max_bandwidth_mb_sec": 0 00:42:27.245 } 00:42:27.245 }, 00:42:27.245 { 00:42:27.245 "method": "bdev_iscsi_set_options", 00:42:27.245 "params": { 00:42:27.245 "timeout_sec": 30 00:42:27.245 } 00:42:27.245 }, 00:42:27.245 { 00:42:27.245 "method": "bdev_nvme_set_options", 00:42:27.245 "params": { 00:42:27.245 "action_on_timeout": "none", 00:42:27.245 "timeout_us": 0, 00:42:27.245 "timeout_admin_us": 0, 00:42:27.245 "keep_alive_timeout_ms": 10000, 00:42:27.245 "arbitration_burst": 0, 00:42:27.245 "low_priority_weight": 0, 00:42:27.245 "medium_priority_weight": 0, 00:42:27.245 "high_priority_weight": 0, 00:42:27.245 "nvme_adminq_poll_period_us": 10000, 00:42:27.245 "nvme_ioq_poll_period_us": 0, 00:42:27.245 "io_queue_requests": 512, 
00:42:27.245 "delay_cmd_submit": true, 00:42:27.245 "transport_retry_count": 4, 00:42:27.245 "bdev_retry_count": 3, 00:42:27.245 "transport_ack_timeout": 0, 00:42:27.245 "ctrlr_loss_timeout_sec": 0, 00:42:27.245 "reconnect_delay_sec": 0, 00:42:27.245 "fast_io_fail_timeout_sec": 0, 00:42:27.245 "disable_auto_failback": false, 00:42:27.245 "generate_uuids": false, 00:42:27.245 "transport_tos": 0, 00:42:27.245 "nvme_error_stat": false, 00:42:27.245 "rdma_srq_size": 0, 00:42:27.245 "io_path_stat": false, 00:42:27.245 "allow_accel_sequence": false, 00:42:27.245 "rdma_max_cq_size": 0, 00:42:27.245 "rdma_cm_event_timeout_ms": 0, 00:42:27.245 "dhchap_digests": [ 00:42:27.245 "sha256", 00:42:27.245 "sha384", 00:42:27.245 "sha512" 00:42:27.245 ], 00:42:27.245 "dhchap_dhgroups": [ 00:42:27.245 "null", 00:42:27.245 "ffdhe2048", 00:42:27.245 "ffdhe3072", 00:42:27.245 "ffdhe4096", 00:42:27.245 "ffdhe6144", 00:42:27.245 "ffdhe8192" 00:42:27.245 ] 00:42:27.245 } 00:42:27.245 }, 00:42:27.245 { 00:42:27.245 "method": "bdev_nvme_attach_controller", 00:42:27.245 "params": { 00:42:27.245 "name": "nvme0", 00:42:27.245 "trtype": "TCP", 00:42:27.245 "adrfam": "IPv4", 00:42:27.245 "traddr": "127.0.0.1", 00:42:27.245 "trsvcid": "4420", 00:42:27.245 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:27.245 "prchk_reftag": false, 00:42:27.245 "prchk_guard": false, 00:42:27.245 "ctrlr_loss_timeout_sec": 0, 00:42:27.245 "reconnect_delay_sec": 0, 00:42:27.245 "fast_io_fail_timeout_sec": 0, 00:42:27.245 "psk": "key0", 00:42:27.245 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:27.245 "hdgst": false, 00:42:27.245 "ddgst": false, 00:42:27.245 "multipath": "multipath" 00:42:27.245 } 00:42:27.245 }, 00:42:27.245 { 00:42:27.245 "method": "bdev_nvme_set_hotplug", 00:42:27.245 "params": { 00:42:27.246 "period_us": 100000, 00:42:27.246 "enable": false 00:42:27.246 } 00:42:27.246 }, 00:42:27.246 { 00:42:27.246 "method": "bdev_wait_for_examine" 00:42:27.246 } 00:42:27.246 ] 00:42:27.246 }, 00:42:27.246 { 
00:42:27.246 "subsystem": "nbd", 00:42:27.246 "config": [] 00:42:27.246 } 00:42:27.246 ] 00:42:27.246 }' 00:42:27.246 19:03:13 keyring_file -- keyring/file.sh@115 -- # killprocess 971337 00:42:27.246 19:03:13 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 971337 ']' 00:42:27.246 19:03:13 keyring_file -- common/autotest_common.sh@958 -- # kill -0 971337 00:42:27.246 19:03:13 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:27.246 19:03:13 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:27.246 19:03:13 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 971337 00:42:27.246 19:03:13 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:27.246 19:03:13 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:27.246 19:03:13 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 971337' 00:42:27.246 killing process with pid 971337 00:42:27.246 19:03:13 keyring_file -- common/autotest_common.sh@973 -- # kill 971337 00:42:27.246 Received shutdown signal, test time was about 1.000000 seconds 00:42:27.246 00:42:27.246 Latency(us) 00:42:27.246 [2024-11-17T18:03:13.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:27.246 [2024-11-17T18:03:13.822Z] =================================================================================================================== 00:42:27.246 [2024-11-17T18:03:13.822Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:27.246 19:03:13 keyring_file -- common/autotest_common.sh@978 -- # wait 971337 00:42:27.505 19:03:13 keyring_file -- keyring/file.sh@118 -- # bperfpid=973347 00:42:27.505 19:03:13 keyring_file -- keyring/file.sh@120 -- # waitforlisten 973347 /var/tmp/bperf.sock 00:42:27.505 19:03:13 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 973347 ']' 00:42:27.505 19:03:13 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:42:27.505 19:03:13 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:27.505 19:03:13 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:27.505 19:03:13 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:27.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:27.505 19:03:13 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:27.505 "subsystems": [ 00:42:27.505 { 00:42:27.505 "subsystem": "keyring", 00:42:27.505 "config": [ 00:42:27.505 { 00:42:27.505 "method": "keyring_file_add_key", 00:42:27.505 "params": { 00:42:27.505 "name": "key0", 00:42:27.505 "path": "/tmp/tmp.P9uq0uhHjI" 00:42:27.505 } 00:42:27.505 }, 00:42:27.505 { 00:42:27.505 "method": "keyring_file_add_key", 00:42:27.505 "params": { 00:42:27.505 "name": "key1", 00:42:27.505 "path": "/tmp/tmp.nxdbSLH33w" 00:42:27.505 } 00:42:27.505 } 00:42:27.505 ] 00:42:27.505 }, 00:42:27.505 { 00:42:27.505 "subsystem": "iobuf", 00:42:27.505 "config": [ 00:42:27.505 { 00:42:27.505 "method": "iobuf_set_options", 00:42:27.505 "params": { 00:42:27.505 "small_pool_count": 8192, 00:42:27.505 "large_pool_count": 1024, 00:42:27.505 "small_bufsize": 8192, 00:42:27.505 "large_bufsize": 135168, 00:42:27.505 "enable_numa": false 00:42:27.505 } 00:42:27.505 } 00:42:27.505 ] 00:42:27.505 }, 00:42:27.505 { 00:42:27.505 "subsystem": "sock", 00:42:27.505 "config": [ 00:42:27.505 { 00:42:27.505 "method": "sock_set_default_impl", 00:42:27.505 "params": { 00:42:27.505 "impl_name": "posix" 00:42:27.505 } 00:42:27.505 }, 00:42:27.505 { 00:42:27.505 "method": "sock_impl_set_options", 00:42:27.505 "params": { 00:42:27.505 "impl_name": "ssl", 00:42:27.505 "recv_buf_size": 4096, 00:42:27.505 
"send_buf_size": 4096, 00:42:27.505 "enable_recv_pipe": true, 00:42:27.505 "enable_quickack": false, 00:42:27.505 "enable_placement_id": 0, 00:42:27.505 "enable_zerocopy_send_server": true, 00:42:27.505 "enable_zerocopy_send_client": false, 00:42:27.505 "zerocopy_threshold": 0, 00:42:27.505 "tls_version": 0, 00:42:27.505 "enable_ktls": false 00:42:27.505 } 00:42:27.505 }, 00:42:27.505 { 00:42:27.505 "method": "sock_impl_set_options", 00:42:27.505 "params": { 00:42:27.505 "impl_name": "posix", 00:42:27.505 "recv_buf_size": 2097152, 00:42:27.505 "send_buf_size": 2097152, 00:42:27.505 "enable_recv_pipe": true, 00:42:27.505 "enable_quickack": false, 00:42:27.505 "enable_placement_id": 0, 00:42:27.505 "enable_zerocopy_send_server": true, 00:42:27.505 "enable_zerocopy_send_client": false, 00:42:27.505 "zerocopy_threshold": 0, 00:42:27.505 "tls_version": 0, 00:42:27.505 "enable_ktls": false 00:42:27.505 } 00:42:27.505 } 00:42:27.505 ] 00:42:27.505 }, 00:42:27.505 { 00:42:27.505 "subsystem": "vmd", 00:42:27.505 "config": [] 00:42:27.505 }, 00:42:27.505 { 00:42:27.505 "subsystem": "accel", 00:42:27.505 "config": [ 00:42:27.505 { 00:42:27.505 "method": "accel_set_options", 00:42:27.505 "params": { 00:42:27.505 "small_cache_size": 128, 00:42:27.505 "large_cache_size": 16, 00:42:27.505 "task_count": 2048, 00:42:27.505 "sequence_count": 2048, 00:42:27.505 "buf_count": 2048 00:42:27.505 } 00:42:27.505 } 00:42:27.505 ] 00:42:27.505 }, 00:42:27.505 { 00:42:27.505 "subsystem": "bdev", 00:42:27.505 "config": [ 00:42:27.505 { 00:42:27.505 "method": "bdev_set_options", 00:42:27.505 "params": { 00:42:27.505 "bdev_io_pool_size": 65535, 00:42:27.505 "bdev_io_cache_size": 256, 00:42:27.505 "bdev_auto_examine": true, 00:42:27.505 "iobuf_small_cache_size": 128, 00:42:27.505 "iobuf_large_cache_size": 16 00:42:27.505 } 00:42:27.505 }, 00:42:27.505 { 00:42:27.505 "method": "bdev_raid_set_options", 00:42:27.505 "params": { 00:42:27.505 "process_window_size_kb": 1024, 00:42:27.505 
"process_max_bandwidth_mb_sec": 0 00:42:27.505 } 00:42:27.505 }, 00:42:27.505 { 00:42:27.505 "method": "bdev_iscsi_set_options", 00:42:27.505 "params": { 00:42:27.505 "timeout_sec": 30 00:42:27.505 } 00:42:27.505 }, 00:42:27.505 { 00:42:27.506 "method": "bdev_nvme_set_options", 00:42:27.506 "params": { 00:42:27.506 "action_on_timeout": "none", 00:42:27.506 "timeout_us": 0, 00:42:27.506 "timeout_admin_us": 0, 00:42:27.506 "keep_alive_timeout_ms": 10000, 00:42:27.506 "arbitration_burst": 0, 00:42:27.506 "low_priority_weight": 0, 00:42:27.506 "medium_priority_weight": 0, 00:42:27.506 "high_priority_weight": 0, 00:42:27.506 "nvme_adminq_poll_period_us": 10000, 00:42:27.506 "nvme_ioq_poll_period_us": 0, 00:42:27.506 "io_queue_requests": 512, 00:42:27.506 "delay_cmd_submit": true, 00:42:27.506 "transport_retry_count": 4, 00:42:27.506 "bdev_retry_count": 3, 00:42:27.506 "transport_ack_timeout": 0, 00:42:27.506 "ctrlr_loss_timeout_sec": 0, 00:42:27.506 "reconnect_delay_sec": 0, 00:42:27.506 "fast_io_fail_timeout_sec": 0, 00:42:27.506 "disable_auto_failback": false, 00:42:27.506 "generate_uuids": false, 00:42:27.506 "transport_tos": 0, 00:42:27.506 "nvme_error_stat": false, 00:42:27.506 "rdma_srq_size": 0, 00:42:27.506 "io_path_stat": false, 00:42:27.506 "allow_accel_sequence": false, 00:42:27.506 "rdma_max_cq_size": 0, 00:42:27.506 "rdma_cm_event_timeout_ms": 0, 00:42:27.506 "dhchap_digests": [ 00:42:27.506 "sha256", 00:42:27.506 "sha384", 00:42:27.506 "sha512" 00:42:27.506 ], 00:42:27.506 "dhchap_dhgroups": [ 00:42:27.506 "null", 00:42:27.506 "ffdhe2048", 00:42:27.506 "ffdhe3072", 00:42:27.506 "ffdhe4096", 00:42:27.506 "ffdhe6144", 00:42:27.506 "ffdhe8192" 00:42:27.506 ] 00:42:27.506 } 00:42:27.506 }, 00:42:27.506 { 00:42:27.506 "method": "bdev_nvme_attach_controller", 00:42:27.506 "params": { 00:42:27.506 "name": "nvme0", 00:42:27.506 "trtype": "TCP", 00:42:27.506 "adrfam": "IPv4", 00:42:27.506 "traddr": "127.0.0.1", 00:42:27.506 "trsvcid": "4420", 00:42:27.506 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:42:27.506 "prchk_reftag": false, 00:42:27.506 "prchk_guard": false, 00:42:27.506 "ctrlr_loss_timeout_sec": 0, 00:42:27.506 "reconnect_delay_sec": 0, 00:42:27.506 "fast_io_fail_timeout_sec": 0, 00:42:27.506 "psk": "key0", 00:42:27.506 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:27.506 "hdgst": false, 00:42:27.506 "ddgst": false, 00:42:27.506 "multipath": "multipath" 00:42:27.506 } 00:42:27.506 }, 00:42:27.506 { 00:42:27.506 "method": "bdev_nvme_set_hotplug", 00:42:27.506 "params": { 00:42:27.506 "period_us": 100000, 00:42:27.506 "enable": false 00:42:27.506 } 00:42:27.506 }, 00:42:27.506 { 00:42:27.506 "method": "bdev_wait_for_examine" 00:42:27.506 } 00:42:27.506 ] 00:42:27.506 }, 00:42:27.506 { 00:42:27.506 "subsystem": "nbd", 00:42:27.506 "config": [] 00:42:27.506 } 00:42:27.506 ] 00:42:27.506 }' 00:42:27.506 19:03:13 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:27.506 19:03:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:27.506 [2024-11-17 19:03:13.869525] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:42:27.506 [2024-11-17 19:03:13.869625] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid973347 ] 00:42:27.506 [2024-11-17 19:03:13.938452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:27.506 [2024-11-17 19:03:13.983368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:27.764 [2024-11-17 19:03:14.161046] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:27.764 19:03:14 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:27.764 19:03:14 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:27.764 19:03:14 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:27.764 19:03:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:27.764 19:03:14 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:28.022 19:03:14 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:28.022 19:03:14 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:28.022 19:03:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:28.022 19:03:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:28.022 19:03:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:28.022 19:03:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:28.022 19:03:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:28.280 19:03:14 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:28.280 19:03:14 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:28.280 19:03:14 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:28.280 19:03:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:28.280 19:03:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:28.280 19:03:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:28.280 19:03:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:28.538 19:03:15 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:28.538 19:03:15 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:28.538 19:03:15 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:28.538 19:03:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:28.797 19:03:15 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:28.797 19:03:15 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:28.797 19:03:15 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.P9uq0uhHjI /tmp/tmp.nxdbSLH33w 00:42:28.797 19:03:15 keyring_file -- keyring/file.sh@20 -- # killprocess 973347 00:42:28.797 19:03:15 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 973347 ']' 00:42:28.797 19:03:15 keyring_file -- common/autotest_common.sh@958 -- # kill -0 973347 00:42:28.797 19:03:15 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:29.055 19:03:15 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:29.055 19:03:15 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973347 00:42:29.055 19:03:15 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:29.055 19:03:15 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:29.055 19:03:15 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 973347' 00:42:29.055 killing process with pid 973347 00:42:29.055 19:03:15 keyring_file -- common/autotest_common.sh@973 -- # kill 973347 00:42:29.055 Received shutdown signal, test time was about 1.000000 seconds 00:42:29.055 00:42:29.055 Latency(us) 00:42:29.055 [2024-11-17T18:03:15.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:29.055 [2024-11-17T18:03:15.631Z] =================================================================================================================== 00:42:29.055 [2024-11-17T18:03:15.631Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:29.055 19:03:15 keyring_file -- common/autotest_common.sh@978 -- # wait 973347 00:42:29.055 19:03:15 keyring_file -- keyring/file.sh@21 -- # killprocess 971304 00:42:29.055 19:03:15 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 971304 ']' 00:42:29.055 19:03:15 keyring_file -- common/autotest_common.sh@958 -- # kill -0 971304 00:42:29.055 19:03:15 keyring_file -- common/autotest_common.sh@959 -- # uname 00:42:29.055 19:03:15 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:29.055 19:03:15 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 971304 00:42:29.312 19:03:15 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:29.312 19:03:15 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:29.312 19:03:15 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 971304' 00:42:29.312 killing process with pid 971304 00:42:29.312 19:03:15 keyring_file -- common/autotest_common.sh@973 -- # kill 971304 00:42:29.312 19:03:15 keyring_file -- common/autotest_common.sh@978 -- # wait 971304 00:42:29.571 00:42:29.571 real 0m14.642s 00:42:29.571 user 0m37.453s 00:42:29.571 sys 0m3.200s 00:42:29.571 19:03:16 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:29.571 19:03:16 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:29.571 ************************************ 00:42:29.571 END TEST keyring_file 00:42:29.571 ************************************ 00:42:29.571 19:03:16 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:42:29.571 19:03:16 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:29.571 19:03:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:29.571 19:03:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:29.571 19:03:16 -- common/autotest_common.sh@10 -- # set +x 00:42:29.571 ************************************ 00:42:29.571 START TEST keyring_linux 00:42:29.571 ************************************ 00:42:29.571 19:03:16 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:29.571 Joined session keyring: 856770349 00:42:29.571 * Looking for test storage... 
00:42:29.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:29.571 19:03:16 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:29.571 19:03:16 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:42:29.571 19:03:16 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:29.830 19:03:16 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:29.830 19:03:16 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:29.830 19:03:16 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:29.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.830 --rc genhtml_branch_coverage=1 00:42:29.830 --rc genhtml_function_coverage=1 00:42:29.830 --rc genhtml_legend=1 00:42:29.830 --rc geninfo_all_blocks=1 00:42:29.830 --rc geninfo_unexecuted_blocks=1 00:42:29.830 00:42:29.830 ' 00:42:29.830 19:03:16 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:29.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.830 --rc genhtml_branch_coverage=1 00:42:29.830 --rc genhtml_function_coverage=1 00:42:29.830 --rc genhtml_legend=1 00:42:29.830 --rc geninfo_all_blocks=1 00:42:29.830 --rc geninfo_unexecuted_blocks=1 00:42:29.830 00:42:29.830 ' 
00:42:29.830 19:03:16 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:29.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.830 --rc genhtml_branch_coverage=1 00:42:29.830 --rc genhtml_function_coverage=1 00:42:29.830 --rc genhtml_legend=1 00:42:29.830 --rc geninfo_all_blocks=1 00:42:29.830 --rc geninfo_unexecuted_blocks=1 00:42:29.830 00:42:29.830 ' 00:42:29.830 19:03:16 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:29.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.830 --rc genhtml_branch_coverage=1 00:42:29.830 --rc genhtml_function_coverage=1 00:42:29.830 --rc genhtml_legend=1 00:42:29.830 --rc geninfo_all_blocks=1 00:42:29.830 --rc geninfo_unexecuted_blocks=1 00:42:29.830 00:42:29.830 ' 00:42:29.830 19:03:16 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:29.830 19:03:16 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:29.830 19:03:16 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:29.830 19:03:16 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.830 19:03:16 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.830 19:03:16 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.830 19:03:16 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:29.830 19:03:16 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:29.830 19:03:16 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:42:29.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:29.831 19:03:16 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:29.831 19:03:16 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:29.831 19:03:16 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:29.831 19:03:16 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:29.831 19:03:16 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:29.831 19:03:16 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:29.831 /tmp/:spdk-test:key0 00:42:29.831 19:03:16 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:29.831 19:03:16 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:29.831 19:03:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:29.831 /tmp/:spdk-test:key1 00:42:29.831 19:03:16 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=973790 00:42:29.831 19:03:16 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:29.831 19:03:16 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 973790 00:42:29.831 19:03:16 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 973790 ']' 00:42:29.831 19:03:16 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:29.831 19:03:16 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:29.831 19:03:16 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:29.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:29.831 19:03:16 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:29.831 19:03:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:29.831 [2024-11-17 19:03:16.351388] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:42:29.831 [2024-11-17 19:03:16.351468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid973790 ] 00:42:30.091 [2024-11-17 19:03:16.416870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:30.091 [2024-11-17 19:03:16.462102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:30.351 19:03:16 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:30.351 19:03:16 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:42:30.351 19:03:16 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:30.351 19:03:16 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.351 19:03:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:30.351 [2024-11-17 19:03:16.720505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:30.351 null0 00:42:30.351 [2024-11-17 19:03:16.752573] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:30.351 [2024-11-17 19:03:16.753073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:30.351 19:03:16 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.351 19:03:16 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:30.351 425959954 00:42:30.351 19:03:16 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:30.351 161870428 00:42:30.351 19:03:16 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=973796 00:42:30.351 19:03:16 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 973796 /var/tmp/bperf.sock 00:42:30.351 19:03:16 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:30.351 19:03:16 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 973796 ']' 00:42:30.351 19:03:16 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:30.351 19:03:16 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:30.351 19:03:16 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:30.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:30.351 19:03:16 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:30.351 19:03:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:30.351 [2024-11-17 19:03:16.824610] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 23.11.0 initialization... 
00:42:30.351 [2024-11-17 19:03:16.824718] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid973796 ] 00:42:30.351 [2024-11-17 19:03:16.892634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:30.609 [2024-11-17 19:03:16.939370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:30.609 19:03:17 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:30.609 19:03:17 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:42:30.609 19:03:17 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:30.609 19:03:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:30.868 19:03:17 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:30.868 19:03:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:31.126 19:03:17 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:31.126 19:03:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:31.386 [2024-11-17 19:03:17.945973] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:31.644 nvme0n1 00:42:31.644 19:03:18 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:42:31.644 19:03:18 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:31.644 19:03:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:31.644 19:03:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:31.644 19:03:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:31.644 19:03:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:31.902 19:03:18 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:31.902 19:03:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:31.902 19:03:18 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:31.902 19:03:18 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:31.902 19:03:18 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:31.902 19:03:18 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:31.902 19:03:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:32.160 19:03:18 keyring_linux -- keyring/linux.sh@25 -- # sn=425959954 00:42:32.160 19:03:18 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:32.160 19:03:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:32.160 19:03:18 keyring_linux -- keyring/linux.sh@26 -- # [[ 425959954 == \4\2\5\9\5\9\9\5\4 ]] 00:42:32.160 19:03:18 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 425959954 00:42:32.160 19:03:18 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:32.160 19:03:18 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:32.160 Running I/O for 1 seconds... 00:42:33.547 10133.00 IOPS, 39.58 MiB/s 00:42:33.547 Latency(us) 00:42:33.547 [2024-11-17T18:03:20.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:33.547 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:33.547 nvme0n1 : 1.01 10136.43 39.60 0.00 0.00 12545.70 6602.15 17670.45 00:42:33.547 [2024-11-17T18:03:20.123Z] =================================================================================================================== 00:42:33.547 [2024-11-17T18:03:20.123Z] Total : 10136.43 39.60 0.00 0.00 12545.70 6602.15 17670.45 00:42:33.547 { 00:42:33.547 "results": [ 00:42:33.547 { 00:42:33.547 "job": "nvme0n1", 00:42:33.547 "core_mask": "0x2", 00:42:33.547 "workload": "randread", 00:42:33.547 "status": "finished", 00:42:33.547 "queue_depth": 128, 00:42:33.547 "io_size": 4096, 00:42:33.547 "runtime": 1.012388, 00:42:33.547 "iops": 10136.429906320502, 00:42:33.547 "mibps": 39.59542932156446, 00:42:33.547 "io_failed": 0, 00:42:33.547 "io_timeout": 0, 00:42:33.547 "avg_latency_us": 12545.697020723706, 00:42:33.547 "min_latency_us": 6602.145185185185, 00:42:33.547 "max_latency_us": 17670.447407407406 00:42:33.547 } 00:42:33.547 ], 00:42:33.547 "core_count": 1 00:42:33.547 } 00:42:33.547 19:03:19 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:33.547 19:03:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:33.547 19:03:20 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:33.547 19:03:20 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:33.547 19:03:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:33.547 19:03:20 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:33.547 19:03:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:33.547 19:03:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:33.805 19:03:20 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:33.805 19:03:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:33.805 19:03:20 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:33.805 19:03:20 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:33.805 19:03:20 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:42:33.805 19:03:20 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:33.805 19:03:20 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:33.805 19:03:20 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:33.805 19:03:20 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:33.805 19:03:20 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:33.805 19:03:20 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:33.805 19:03:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:34.065 [2024-11-17 19:03:20.556759] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:34.065 [2024-11-17 19:03:20.556857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f1a90 (107): Transport endpoint is not connected 00:42:34.065 [2024-11-17 19:03:20.557846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f1a90 (9): Bad file descriptor 00:42:34.065 [2024-11-17 19:03:20.558845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:34.065 [2024-11-17 19:03:20.558868] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:34.065 [2024-11-17 19:03:20.558883] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:34.065 [2024-11-17 19:03:20.558899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:42:34.065 request: 00:42:34.065 { 00:42:34.065 "name": "nvme0", 00:42:34.065 "trtype": "tcp", 00:42:34.065 "traddr": "127.0.0.1", 00:42:34.065 "adrfam": "ipv4", 00:42:34.065 "trsvcid": "4420", 00:42:34.065 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:34.065 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:34.065 "prchk_reftag": false, 00:42:34.065 "prchk_guard": false, 00:42:34.065 "hdgst": false, 00:42:34.065 "ddgst": false, 00:42:34.065 "psk": ":spdk-test:key1", 00:42:34.065 "allow_unrecognized_csi": false, 00:42:34.065 "method": "bdev_nvme_attach_controller", 00:42:34.065 "req_id": 1 00:42:34.065 } 00:42:34.065 Got JSON-RPC error response 00:42:34.065 response: 00:42:34.065 { 00:42:34.065 "code": -5, 00:42:34.065 "message": "Input/output error" 00:42:34.065 } 00:42:34.065 19:03:20 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:42:34.065 19:03:20 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:34.065 19:03:20 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:34.065 19:03:20 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@33 -- # sn=425959954 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 425959954 00:42:34.065 1 links removed 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:34.065 
19:03:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@33 -- # sn=161870428 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 161870428 00:42:34.065 1 links removed 00:42:34.065 19:03:20 keyring_linux -- keyring/linux.sh@41 -- # killprocess 973796 00:42:34.065 19:03:20 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 973796 ']' 00:42:34.065 19:03:20 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 973796 00:42:34.065 19:03:20 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:42:34.065 19:03:20 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:34.065 19:03:20 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973796 00:42:34.065 19:03:20 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:34.065 19:03:20 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:34.065 19:03:20 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973796' 00:42:34.065 killing process with pid 973796 00:42:34.065 19:03:20 keyring_linux -- common/autotest_common.sh@973 -- # kill 973796 00:42:34.065 Received shutdown signal, test time was about 1.000000 seconds 00:42:34.065 00:42:34.065 Latency(us) 00:42:34.065 [2024-11-17T18:03:20.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:34.065 [2024-11-17T18:03:20.641Z] =================================================================================================================== 00:42:34.065 [2024-11-17T18:03:20.641Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:34.065 19:03:20 keyring_linux -- common/autotest_common.sh@978 -- # wait 973796 
00:42:34.325 19:03:20 keyring_linux -- keyring/linux.sh@42 -- # killprocess 973790 00:42:34.325 19:03:20 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 973790 ']' 00:42:34.325 19:03:20 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 973790 00:42:34.325 19:03:20 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:42:34.325 19:03:20 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:34.325 19:03:20 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973790 00:42:34.325 19:03:20 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:34.325 19:03:20 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:34.325 19:03:20 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973790' 00:42:34.325 killing process with pid 973790 00:42:34.325 19:03:20 keyring_linux -- common/autotest_common.sh@973 -- # kill 973790 00:42:34.325 19:03:20 keyring_linux -- common/autotest_common.sh@978 -- # wait 973790 00:42:34.894 00:42:34.894 real 0m5.180s 00:42:34.894 user 0m10.390s 00:42:34.894 sys 0m1.602s 00:42:34.894 19:03:21 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:34.894 19:03:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:34.894 ************************************ 00:42:34.894 END TEST keyring_linux 00:42:34.894 ************************************ 00:42:34.894 19:03:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:34.894 19:03:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:34.894 19:03:21 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:42:34.894 19:03:21 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:42:34.894 19:03:21 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:42:34.894 19:03:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:34.894 19:03:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:34.894 19:03:21 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:42:34.894 19:03:21 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:42:34.894 19:03:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:34.894 19:03:21 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:42:34.894 19:03:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:34.894 19:03:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:34.894 19:03:21 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:42:34.894 19:03:21 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:42:34.894 19:03:21 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:42:34.894 19:03:21 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:42:34.894 19:03:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:34.894 19:03:21 -- common/autotest_common.sh@10 -- # set +x 00:42:34.894 19:03:21 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:42:34.894 19:03:21 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:42:34.894 19:03:21 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:42:34.894 19:03:21 -- common/autotest_common.sh@10 -- # set +x 00:42:36.795 INFO: APP EXITING 00:42:36.795 INFO: killing all VMs 00:42:36.795 INFO: killing vhost app 00:42:36.795 INFO: EXIT DONE 00:42:38.172 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:42:38.172 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:42:38.172 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:42:38.172 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:42:38.172 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:42:38.172 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:42:38.172 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:42:38.172 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:42:38.172 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:42:38.172 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:42:38.172 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:42:38.172 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:42:38.172 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:42:38.172 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:42:38.172 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:42:38.172 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:42:38.172 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:42:39.551 Cleaning 00:42:39.551 Removing: /var/run/dpdk/spdk0/config 00:42:39.551 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:39.551 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:39.551 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:39.551 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:39.551 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:39.551 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:39.551 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:39.551 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:39.551 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:39.551 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:39.551 Removing: /var/run/dpdk/spdk1/config 00:42:39.551 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:39.551 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:39.551 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:39.551 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:39.551 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:39.551 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:39.551 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:39.551 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:39.551 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:39.551 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:39.551 Removing: /var/run/dpdk/spdk2/config 00:42:39.551 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:39.551 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:39.551 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:39.551 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:39.551 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:39.551 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:39.551 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:39.551 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:39.551 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:39.551 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:39.551 Removing: /var/run/dpdk/spdk3/config 00:42:39.551 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:39.551 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:39.551 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:39.551 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:39.551 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:39.551 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:39.551 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:39.551 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:39.551 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:39.551 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:39.551 Removing: /var/run/dpdk/spdk4/config 00:42:39.551 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:39.551 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:39.551 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:39.551 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:39.551 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:39.551 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:39.551 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:39.551 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:39.551 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:39.551 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:42:39.551 Removing: /dev/shm/bdev_svc_trace.1 00:42:39.551 Removing: /dev/shm/nvmf_trace.0 00:42:39.551 Removing: /dev/shm/spdk_tgt_trace.pid590873 00:42:39.551 Removing: /var/run/dpdk/spdk0 00:42:39.551 Removing: /var/run/dpdk/spdk1 00:42:39.551 Removing: /var/run/dpdk/spdk2 00:42:39.551 Removing: /var/run/dpdk/spdk3 00:42:39.551 Removing: /var/run/dpdk/spdk4 00:42:39.551 Removing: /var/run/dpdk/spdk_pid589251 00:42:39.551 Removing: /var/run/dpdk/spdk_pid589993 00:42:39.551 Removing: /var/run/dpdk/spdk_pid590873 00:42:39.551 Removing: /var/run/dpdk/spdk_pid591269 00:42:39.551 Removing: /var/run/dpdk/spdk_pid591956 00:42:39.552 Removing: /var/run/dpdk/spdk_pid592093 00:42:39.552 Removing: /var/run/dpdk/spdk_pid592809 00:42:39.552 Removing: /var/run/dpdk/spdk_pid592820 00:42:39.552 Removing: /var/run/dpdk/spdk_pid593080 00:42:39.552 Removing: /var/run/dpdk/spdk_pid594398 00:42:39.552 Removing: /var/run/dpdk/spdk_pid595352 00:42:39.552 Removing: /var/run/dpdk/spdk_pid595749 00:42:39.552 Removing: /var/run/dpdk/spdk_pid595948 00:42:39.552 Removing: /var/run/dpdk/spdk_pid596163 00:42:39.552 Removing: /var/run/dpdk/spdk_pid596359 00:42:39.552 Removing: /var/run/dpdk/spdk_pid596523 00:42:39.552 Removing: /var/run/dpdk/spdk_pid597035 00:42:39.552 Removing: /var/run/dpdk/spdk_pid597368 00:42:39.552 Removing: /var/run/dpdk/spdk_pid597684 00:42:39.552 Removing: /var/run/dpdk/spdk_pid600180 00:42:39.552 Removing: /var/run/dpdk/spdk_pid600346 00:42:39.552 Removing: /var/run/dpdk/spdk_pid600504 00:42:39.552 Removing: /var/run/dpdk/spdk_pid600514 00:42:39.552 Removing: /var/run/dpdk/spdk_pid600934 00:42:39.552 Removing: /var/run/dpdk/spdk_pid600937 00:42:39.552 Removing: /var/run/dpdk/spdk_pid601246 00:42:39.552 Removing: /var/run/dpdk/spdk_pid601368 00:42:39.552 Removing: /var/run/dpdk/spdk_pid601533 00:42:39.552 Removing: /var/run/dpdk/spdk_pid601549 00:42:39.552 Removing: /var/run/dpdk/spdk_pid601715 00:42:39.552 Removing: /var/run/dpdk/spdk_pid601842 00:42:39.552 
Removing: /var/run/dpdk/spdk_pid602215 00:42:39.552 Removing: /var/run/dpdk/spdk_pid602375 00:42:39.552 Removing: /var/run/dpdk/spdk_pid602582 00:42:39.552 Removing: /var/run/dpdk/spdk_pid604809 00:42:39.552 Removing: /var/run/dpdk/spdk_pid607448 00:42:39.552 Removing: /var/run/dpdk/spdk_pid614448 00:42:39.552 Removing: /var/run/dpdk/spdk_pid614850 00:42:39.552 Removing: /var/run/dpdk/spdk_pid617382 00:42:39.552 Removing: /var/run/dpdk/spdk_pid617544 00:42:39.552 Removing: /var/run/dpdk/spdk_pid620179 00:42:39.552 Removing: /var/run/dpdk/spdk_pid623911 00:42:39.552 Removing: /var/run/dpdk/spdk_pid625999 00:42:39.552 Removing: /var/run/dpdk/spdk_pid633089 00:42:39.552 Removing: /var/run/dpdk/spdk_pid638389 00:42:39.552 Removing: /var/run/dpdk/spdk_pid639591 00:42:39.552 Removing: /var/run/dpdk/spdk_pid640261 00:42:39.552 Removing: /var/run/dpdk/spdk_pid650630 00:42:39.552 Removing: /var/run/dpdk/spdk_pid652924 00:42:39.552 Removing: /var/run/dpdk/spdk_pid707880 00:42:39.552 Removing: /var/run/dpdk/spdk_pid711115 00:42:39.552 Removing: /var/run/dpdk/spdk_pid714930 00:42:39.552 Removing: /var/run/dpdk/spdk_pid719191 00:42:39.552 Removing: /var/run/dpdk/spdk_pid719203 00:42:39.552 Removing: /var/run/dpdk/spdk_pid719855 00:42:39.552 Removing: /var/run/dpdk/spdk_pid720458 00:42:39.552 Removing: /var/run/dpdk/spdk_pid721044 00:42:39.552 Removing: /var/run/dpdk/spdk_pid721445 00:42:39.552 Removing: /var/run/dpdk/spdk_pid721497 00:42:39.552 Removing: /var/run/dpdk/spdk_pid721704 00:42:39.552 Removing: /var/run/dpdk/spdk_pid721836 00:42:39.552 Removing: /var/run/dpdk/spdk_pid721848 00:42:39.552 Removing: /var/run/dpdk/spdk_pid722505 00:42:39.552 Removing: /var/run/dpdk/spdk_pid723154 00:42:39.552 Removing: /var/run/dpdk/spdk_pid724170 00:42:39.552 Removing: /var/run/dpdk/spdk_pid724716 00:42:39.552 Removing: /var/run/dpdk/spdk_pid724718 00:42:39.552 Removing: /var/run/dpdk/spdk_pid724980 00:42:39.552 Removing: /var/run/dpdk/spdk_pid725876 00:42:39.552 Removing: 
/var/run/dpdk/spdk_pid726598 00:42:39.552 Removing: /var/run/dpdk/spdk_pid731938 00:42:39.552 Removing: /var/run/dpdk/spdk_pid759722 00:42:39.552 Removing: /var/run/dpdk/spdk_pid762640 00:42:39.552 Removing: /var/run/dpdk/spdk_pid763818 00:42:39.552 Removing: /var/run/dpdk/spdk_pid765137 00:42:39.552 Removing: /var/run/dpdk/spdk_pid765278 00:42:39.552 Removing: /var/run/dpdk/spdk_pid765424 00:42:39.552 Removing: /var/run/dpdk/spdk_pid765563 00:42:39.552 Removing: /var/run/dpdk/spdk_pid766002 00:42:39.552 Removing: /var/run/dpdk/spdk_pid767317 00:42:39.552 Removing: /var/run/dpdk/spdk_pid768054 00:42:39.552 Removing: /var/run/dpdk/spdk_pid768479 00:42:39.552 Removing: /var/run/dpdk/spdk_pid769980 00:42:39.552 Removing: /var/run/dpdk/spdk_pid770403 00:42:39.552 Removing: /var/run/dpdk/spdk_pid770955 00:42:39.552 Removing: /var/run/dpdk/spdk_pid773359 00:42:39.552 Removing: /var/run/dpdk/spdk_pid777260 00:42:39.552 Removing: /var/run/dpdk/spdk_pid777261 00:42:39.552 Removing: /var/run/dpdk/spdk_pid777262 00:42:39.552 Removing: /var/run/dpdk/spdk_pid779484 00:42:39.552 Removing: /var/run/dpdk/spdk_pid781688 00:42:39.552 Removing: /var/run/dpdk/spdk_pid785210 00:42:39.552 Removing: /var/run/dpdk/spdk_pid808280 00:42:39.552 Removing: /var/run/dpdk/spdk_pid811050 00:42:39.552 Removing: /var/run/dpdk/spdk_pid814943 00:42:39.552 Removing: /var/run/dpdk/spdk_pid815893 00:42:39.552 Removing: /var/run/dpdk/spdk_pid816989 00:42:39.552 Removing: /var/run/dpdk/spdk_pid817957 00:42:39.552 Removing: /var/run/dpdk/spdk_pid820714 00:42:39.552 Removing: /var/run/dpdk/spdk_pid823290 00:42:39.552 Removing: /var/run/dpdk/spdk_pid825554 00:42:39.552 Removing: /var/run/dpdk/spdk_pid829903 00:42:39.552 Removing: /var/run/dpdk/spdk_pid829905 00:42:39.552 Removing: /var/run/dpdk/spdk_pid832689 00:42:39.552 Removing: /var/run/dpdk/spdk_pid832836 00:42:39.552 Removing: /var/run/dpdk/spdk_pid833069 00:42:39.552 Removing: /var/run/dpdk/spdk_pid833342 00:42:39.552 Removing: 
/var/run/dpdk/spdk_pid833347 00:42:39.552 Removing: /var/run/dpdk/spdk_pid834550 00:42:39.552 Removing: /var/run/dpdk/spdk_pid835725 00:42:39.552 Removing: /var/run/dpdk/spdk_pid836901 00:42:39.552 Removing: /var/run/dpdk/spdk_pid838344 00:42:39.552 Removing: /var/run/dpdk/spdk_pid839872 00:42:39.552 Removing: /var/run/dpdk/spdk_pid841055 00:42:39.552 Removing: /var/run/dpdk/spdk_pid844865 00:42:39.552 Removing: /var/run/dpdk/spdk_pid845257 00:42:39.552 Removing: /var/run/dpdk/spdk_pid846603 00:42:39.552 Removing: /var/run/dpdk/spdk_pid847340 00:42:39.552 Removing: /var/run/dpdk/spdk_pid851194 00:42:39.552 Removing: /var/run/dpdk/spdk_pid853060 00:42:39.552 Removing: /var/run/dpdk/spdk_pid856474 00:42:39.552 Removing: /var/run/dpdk/spdk_pid859933 00:42:39.552 Removing: /var/run/dpdk/spdk_pid866410 00:42:39.552 Removing: /var/run/dpdk/spdk_pid871505 00:42:39.552 Removing: /var/run/dpdk/spdk_pid871508 00:42:39.811 Removing: /var/run/dpdk/spdk_pid883742 00:42:39.811 Removing: /var/run/dpdk/spdk_pid884272 00:42:39.811 Removing: /var/run/dpdk/spdk_pid884684 00:42:39.811 Removing: /var/run/dpdk/spdk_pid885087 00:42:39.811 Removing: /var/run/dpdk/spdk_pid885669 00:42:39.811 Removing: /var/run/dpdk/spdk_pid886071 00:42:39.811 Removing: /var/run/dpdk/spdk_pid886480 00:42:39.811 Removing: /var/run/dpdk/spdk_pid886949 00:42:39.811 Removing: /var/run/dpdk/spdk_pid889397 00:42:39.811 Removing: /var/run/dpdk/spdk_pid889650 00:42:39.811 Removing: /var/run/dpdk/spdk_pid893332 00:42:39.811 Removing: /var/run/dpdk/spdk_pid893502 00:42:39.811 Removing: /var/run/dpdk/spdk_pid896862 00:42:39.811 Removing: /var/run/dpdk/spdk_pid899356 00:42:39.811 Removing: /var/run/dpdk/spdk_pid906877 00:42:39.811 Removing: /var/run/dpdk/spdk_pid907270 00:42:39.811 Removing: /var/run/dpdk/spdk_pid909773 00:42:39.811 Removing: /var/run/dpdk/spdk_pid910001 00:42:39.811 Removing: /var/run/dpdk/spdk_pid912548 00:42:39.811 Removing: /var/run/dpdk/spdk_pid916231 00:42:39.811 Removing: 
/var/run/dpdk/spdk_pid918386 00:42:39.811 Removing: /var/run/dpdk/spdk_pid924638 00:42:39.811 Removing: /var/run/dpdk/spdk_pid929826 00:42:39.811 Removing: /var/run/dpdk/spdk_pid931128 00:42:39.811 Removing: /var/run/dpdk/spdk_pid931789 00:42:39.811 Removing: /var/run/dpdk/spdk_pid942466 00:42:39.811 Removing: /var/run/dpdk/spdk_pid944716 00:42:39.811 Removing: /var/run/dpdk/spdk_pid946706 00:42:39.811 Removing: /var/run/dpdk/spdk_pid951630 00:42:39.811 Removing: /var/run/dpdk/spdk_pid951752 00:42:39.811 Removing: /var/run/dpdk/spdk_pid954662 00:42:39.811 Removing: /var/run/dpdk/spdk_pid956052 00:42:39.811 Removing: /var/run/dpdk/spdk_pid957334 00:42:39.811 Removing: /var/run/dpdk/spdk_pid958193 00:42:39.811 Removing: /var/run/dpdk/spdk_pid959596 00:42:39.811 Removing: /var/run/dpdk/spdk_pid960348 00:42:39.811 Removing: /var/run/dpdk/spdk_pid965745 00:42:39.811 Removing: /var/run/dpdk/spdk_pid966133 00:42:39.811 Removing: /var/run/dpdk/spdk_pid966523 00:42:39.811 Removing: /var/run/dpdk/spdk_pid968082 00:42:39.811 Removing: /var/run/dpdk/spdk_pid968449 00:42:39.811 Removing: /var/run/dpdk/spdk_pid968755 00:42:39.811 Removing: /var/run/dpdk/spdk_pid971304 00:42:39.811 Removing: /var/run/dpdk/spdk_pid971337 00:42:39.811 Removing: /var/run/dpdk/spdk_pid973347 00:42:39.811 Removing: /var/run/dpdk/spdk_pid973790 00:42:39.811 Removing: /var/run/dpdk/spdk_pid973796 00:42:39.811 Clean 00:42:39.811 19:03:26 -- common/autotest_common.sh@1453 -- # return 0 00:42:39.811 19:03:26 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:42:39.811 19:03:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:39.811 19:03:26 -- common/autotest_common.sh@10 -- # set +x 00:42:39.811 19:03:26 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:42:39.811 19:03:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:39.811 19:03:26 -- common/autotest_common.sh@10 -- # set +x 00:42:39.811 19:03:26 -- spdk/autotest.sh@392 -- # chmod a+r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:39.811 19:03:26 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:42:39.811 19:03:26 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:42:39.811 19:03:26 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:42:39.811 19:03:26 -- spdk/autotest.sh@398 -- # hostname 00:42:39.811 19:03:26 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:42:40.071 geninfo: WARNING: invalid characters removed from testname! 00:43:12.153 19:03:57 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:16.465 19:04:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:19.007 19:04:05 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:22.302 19:04:08 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:24.842 19:04:11 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:28.137 19:04:14 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:30.678 19:04:17 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:30.678 19:04:17 -- spdk/autorun.sh@1 -- $ timing_finish 00:43:30.678 19:04:17 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:43:30.678 19:04:17 -- 
common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:30.678 19:04:17 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:30.678 19:04:17 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:30.678 + [[ -n 497531 ]] 00:43:30.678 + sudo kill 497531 00:43:30.689 [Pipeline] } 00:43:30.704 [Pipeline] // stage 00:43:30.709 [Pipeline] } 00:43:30.723 [Pipeline] // timeout 00:43:30.728 [Pipeline] } 00:43:30.742 [Pipeline] // catchError 00:43:30.747 [Pipeline] } 00:43:30.764 [Pipeline] // wrap 00:43:30.770 [Pipeline] } 00:43:30.782 [Pipeline] // catchError 00:43:30.792 [Pipeline] stage 00:43:30.795 [Pipeline] { (Epilogue) 00:43:30.808 [Pipeline] catchError 00:43:30.810 [Pipeline] { 00:43:30.822 [Pipeline] echo 00:43:30.824 Cleanup processes 00:43:30.830 [Pipeline] sh 00:43:31.119 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:31.119 986009 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:31.138 [Pipeline] sh 00:43:31.424 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:31.425 ++ awk '{print $1}' 00:43:31.425 ++ grep -v 'sudo pgrep' 00:43:31.425 + sudo kill -9 00:43:31.425 + true 00:43:31.437 [Pipeline] sh 00:43:31.722 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:43:43.929 [Pipeline] sh 00:43:44.216 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:43:44.216 Artifacts sizes are good 00:43:44.232 [Pipeline] archiveArtifacts 00:43:44.239 Archiving artifacts 00:43:44.437 [Pipeline] sh 00:43:44.746 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:43:44.762 [Pipeline] cleanWs 00:43:44.772 [WS-CLEANUP] Deleting project workspace... 00:43:44.772 [WS-CLEANUP] Deferred wipeout is used... 
00:43:44.780 [WS-CLEANUP] done 00:43:44.782 [Pipeline] } 00:43:44.798 [Pipeline] // catchError 00:43:44.810 [Pipeline] sh 00:43:45.100 + logger -p user.info -t JENKINS-CI 00:43:45.108 [Pipeline] } 00:43:45.121 [Pipeline] // stage 00:43:45.125 [Pipeline] } 00:43:45.139 [Pipeline] // node 00:43:45.143 [Pipeline] End of Pipeline 00:43:45.182 Finished: SUCCESS